Sample records for array camera red

  1. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After developing the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive-optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2. First Light Imaging's C-RED One infrared camera captures up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough was made possible by the use of an e-APD infrared focal plane array, a genuinely disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board with an FPGA. We will show its performance and present its main features. In addition to this project, First Light Imaging developed a 640x512 InGaAs fast camera, called C-RED 2, with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the Future" program and the Provence-Alpes-Côte d'Azur Region, in the frame of the CPER.

  2. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address high-precision attitude measurement of a fast-flying target over a large volume and wide field of view, and after comparing existing measurement methods, we propose a system in which two array CCDs assist three linear CCDs in identifying multiple cooperative targets. This avoids the nonlinear system errors, the large number of calibration parameters, and the over-constrained camera placement of the existing nine-linear-CCD spectroscopic test system. Mathematical models of the binocular-vision and three-linear-CCD subsystems are established. Three red LED marker lights form a triangle whose vertex coordinates are measured in advance by a coordinate measuring machine; three blue LED lights are added along the sides of the triangle as auxiliaries so that the array CCDs can more easily identify the three red LEDs, while each linear CCD camera carries a red filter to reject the blue LEDs and reduce stray light. The array CCDs measure the spots, identify them, and compute the spatial coordinates of the red LEDs, while the linear CCDs measure the three red spots to solve the linear-CCD test system, which yields 27 candidate solutions. Using the array-CCD coordinates to assist the linear CCDs achieves spot identification and solves the difficult problem of multi-target identification with linear CCDs. Exploiting the imaging characteristics of linear CCDs, a special cylindrical lens system with a telecentric optical design was developed so that the energy center of the spot position changes little in the direction perpendicular to the optical axis over the depth-of-convergence range, ensuring high-precision image quality; the complete test system improves both the speed and the precision of spatial attitude measurement.

  3. C-RED One: the infrared camera using the Saphira e-APD detector

    NASA Astrophysics Data System (ADS)

    Greffe, Timothée; Feautrier, Philippe; Gach, Jean-Luc; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Boutolleau, David; Baker, Ian

    2016-08-01

    First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. This breakthrough was made possible by the use of an e-APD infrared focal plane array, a genuinely disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board with an FPGA. We will show its performance and present its main features. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.

  4. C-RED one: ultra-high speed wavefront sensing in the infrared made possible

    NASA Astrophysics Data System (ADS)

    Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian

    2016-07-01

    First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise. This breakthrough was made possible by the use of an e-APD infrared focal plane array, a genuinely disruptive technology in imaging. We will show the camera's performance and main features and compare them to other high-performance wavefront-sensing cameras, such as OCAM2, in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.

  5. Stellar Snowflake Cluster

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [Figures removed for brevity; see original site. Figure 1: Stellar Snowflake Cluster combined image. Figure 2: Infrared Array Camera image. Figure 3: Multiband Imaging Photometer image.]

    Newborn stars, hidden behind thick dust, are revealed in this image of a section of the Christmas Tree cluster from NASA's Spitzer Space Telescope, created in a joint effort between Spitzer's infrared array camera and multiband imaging photometer instruments.

    The newly revealed infant stars appear as pink and red specks toward the center of the combined image (fig. 1). The stars appear to have formed in regularly spaced intervals along linear structures in a configuration that resembles the spokes of a wheel or the pattern of a snowflake. Hence, astronomers have nicknamed this the 'Snowflake' cluster.

    Star-forming clouds like this one are dynamic and evolving structures. Since the stars trace the straight line pattern of spokes of a wheel, scientists believe that these are newborn stars, or 'protostars.' At a mere 100,000 years old, these infant structures have yet to 'crawl' away from their location of birth. Over time, the natural drifting motions of each star will break this order, and the snowflake design will be no more.

    While most of the visible-light stars that give the Christmas Tree cluster its name and triangular shape do not shine brightly in Spitzer's infrared eyes, all of the stars forming from this dusty cloud are considered part of the cluster.

    Like a dusty cosmic finger pointing up to the newborn clusters, Spitzer also illuminates the optically dark and dense Cone nebula, the tip of which can be seen towards the bottom left corner of each image.

    This combined image shows the presence of organic molecules mixed with dust as wisps of green, which have been illuminated by nearby star formation. The larger yellowish dots neighboring the baby red stars in the Snowflake Cluster are massive stellar infants forming from the same cloud. The blue dots sprinkled across the image represent older Milky Way stars at various distances along this line of sight. This image is a five-channel, false-color composite, showing emission from wavelengths of 3.6 and 4.5 microns (blue), 5.8 microns (cyan), 8 microns (green), and 24 microns (red).

    The top right image (fig. 2) from the infrared array camera shows that the nebula is still actively forming stars. The wisps of red (represented as green in the combined image) are organic molecules mixed with dust, which have been illuminated by nearby star formation. The infrared array camera picture is a four-channel, false-color composite, showing emission from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8.0 microns (red).

    The bottom right image (fig. 3) from the multiband imaging photometer shows the colder dust of the nebula and unwraps the youngest stellar babies from their dusty covering. This is a false-color image showing emission at 24 microns (red).

  6. Pre-discovery detections and progenitor candidate for SPIRITS17qm in NGC 1365

    NASA Astrophysics Data System (ADS)

    Jencson, J. E.; Bond, H. E.; Adams, S. M.; Kasliwal, M. M.

    2018-04-01

    We report the detection of a pre-discovery outburst of SPIRITS17qm, discovered as part of the ongoing Spitzer InfraRed Intensive Transients Survey (SPIRITS) using the 3.6 and 4.5 micron imaging channels ([3.6] and [4.5]) of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope (ATel #11575).

  7. Pre-discovery detections and progenitor candidate for SPIRITS17pc in NGC 4388

    NASA Astrophysics Data System (ADS)

    Jencson, J. E.; Bond, H. E.; Adams, S. M.; Kasliwal, M. M.

    2018-04-01

    We report detections of pre-discovery outbursts of SPIRITS17pc, discovered as part of the ongoing Spitzer InfraRed Intensive Transients Survey (SPIRITS) using the 3.6 and 4.5 micron imaging channels ([3.6] and [4.5]) of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope (ATel #11575).

  8. Smartphone-Based VOC Sensor Using Colorimetric Polydiacetylenes.

    PubMed

    Park, Dong-Hoon; Heo, Jung-Moo; Jeong, Woomin; Yoo, Young Hyuk; Park, Bum Jun; Kim, Jong-Man

    2018-02-07

    Owing to a unique colorimetric (typically blue-to-red) feature upon environmental stimulation, polydiacetylenes (PDAs) have been actively employed in chemosensor systems. We developed a highly accurate and simple volatile organic compound (VOC) sensor system that can be operated using a conventional smartphone. The procedure begins with forming an array of four different PDAs on conventional paper using inkjet printing of four corresponding diacetylenes followed by photopolymerization. A database of color changes (i.e., red and hue values) is then constructed on the basis of different solvatochromic responses of the 4 PDAs to 11 organic solvents. Exposure of the PDA array to an unknown solvent promotes color changes, which are imaged using a smartphone camera and analyzed using the app. A comparison of the color changes to the database promoted by the 11 solvents enables the smartphone app to identify the unknown solvent with 100% accuracy. Additionally, it was demonstrated that the PDA array sensor was sufficiently sensitive to accurately detect the 11 VOC gases.
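
The matching step described above amounts to nearest-neighbour classification of the array's colour response. A minimal sketch in Python, with an invented (red, hue) response database, since the paper's actual calibration values are not given in the abstract:

```python
import math

# Hypothetical database of (red, hue) responses for a 4-PDA array exposed
# to three solvents (values invented for illustration; the paper builds
# its database from smartphone images of the printed array).
DATABASE = {
    "ethanol": [(120, 10), (200, 5), (90, 350), (60, 340)],
    "acetone": [(180, 355), (140, 20), (200, 0), (110, 330)],
    "toluene": [(60, 30), (80, 40), (150, 15), (220, 345)],
}

def identify_solvent(measured, database=DATABASE):
    """Return the database solvent whose 4-spot (red, hue) response is
    closest to the measured response in Euclidean distance.
    (Hue is circular; a real implementation would handle wrap-around.)"""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2
                             for (r1, h1), (r2, h2) in zip(a, b)
                             for x, y in ((r1, r2), (h1, h2))))
    return min(database, key=lambda name: dist(measured, database[name]))
```

With a sufficiently distinctive database, even this naive matcher separates the solvents cleanly, which is consistent with the 100% accuracy the authors report for their 11-solvent set.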

  9. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
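
The colour-coding logic can be sketched as a per-pixel classification on the infrared band intensities. The band names and thresholds below are assumptions for illustration, not the system's actual calibration:

```python
def code_pixel(h2_band, co2_band, broadband):
    """Map three mid-IR band intensities (arbitrary 0-255 units; threshold
    and ratio values invented for illustration) to the display colouring
    described above: hydrogen fires red, carbon-based fires blue, other
    hot objects green, and no thermal source left in black and white."""
    HOT = 40  # assumed detection threshold
    if h2_band > HOT and h2_band > 2 * co2_band:
        return "red"        # water-vapour-band emission dominates: hydrogen fire
    if co2_band > HOT and co2_band > 2 * h2_band:
        return "blue"       # CO2-band emission dominates: carbon-based fire
    if broadband > HOT:
        return "green"      # hot, but neither signature: generic thermal source
    return "greyscale"      # no thermal source: keep the visible-camera image
```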

  10. Effects of red light camera enforcement on fatal crashes in large U.S. cities.

    PubMed

    Hu, Wen; McCartt, Anne T; Teoh, Eric R

    2011-08-01

    To estimate the effects of red light camera enforcement on per capita fatal crash rates at intersections with signal lights. From the 99 large U.S. cities with more than 200,000 residents in 2008, 14 cities were identified with red light camera enforcement programs for all of 2004-2008 but at no time during 1992-1996, and 48 cities were identified without camera programs during either period. Analyses compared the citywide per capita rate of fatal red light running crashes and the citywide per capita rate of all fatal crashes at signalized intersections during the two study periods, and the rate changes were then compared for cities with and without camera programs. Poisson regression was used to model crash rates as a function of red light camera enforcement, land area, and population density. The average annual rate of fatal red light running crashes declined for both study groups, but the decline was larger for cities with red light camera enforcement programs than for cities without them (35% vs. 14%). The average annual rate of all fatal crashes at signalized intersections decreased by 14% for cities with camera programs and increased slightly (2%) for cities without cameras. After controlling for population density and land area, the rate of fatal red light running crashes during 2004-2008 for cities with camera programs was an estimated 24% lower than would have been expected without cameras, and the rate of all fatal crashes at signalized intersections was an estimated 17% lower. Red light camera enforcement programs were associated with a statistically significant reduction in the citywide rate of fatal red light running crashes and a smaller but still significant reduction in the rate of all fatal crashes at signalized intersections. The study adds to the large body of evidence that red light camera enforcement can prevent the most serious crashes. Communities seeking to reduce crashes at intersections should consider this evidence. Copyright © 2011 Elsevier Ltd. All rights reserved.
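
The headline comparison can be reproduced as simple rate arithmetic. The sketch below is a naive ratio-of-ratios, not the paper's Poisson regression with land-area and density covariates, but it shows where a figure close to the reported 24% comes from:

```python
def pct_change(before, after):
    """Percent change in a per-capita crash rate between study periods."""
    return 100.0 * (after - before) / before

def camera_effect(camera_before, camera_after, control_before, control_after):
    """Ratio-of-ratios (a difference-in-differences on the log scale):
    the camera cities' rate change relative to the change the control
    cities would predict.  Negative values mean fewer crashes than
    expected without cameras."""
    camera_ratio = camera_after / camera_before
    control_ratio = control_after / control_before
    return 100.0 * (camera_ratio / control_ratio - 1.0)

# With the abstract's declines -- 35% in camera cities, 14% in controls --
# the relative effect comes out close to the reported 24% reduction.
```

The published estimate additionally controls for land area and population density, so the agreement of this back-of-envelope figure with the paper's 24% is only approximate.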

  11. Hierarchical scheme for detecting the rotating MIMO transmission of the in-door RGB-LED visible light wireless communications using mobile-phone camera

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Hao; Chow, Chi-Wai

    2015-01-01

    A multiple-input and multiple-output (MIMO) scheme can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses the mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n Red-Green-Blue (RGB) LED array is desirable. The key step in decoding this signal is detecting the signal direction: if the LED transmitter (Tx) is rotated, the Rx may not register the rotation and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme that reduces the computational complexity of rotation detection in an LED array VLC system. We use the n×n RGB LED array as the MIMO Tx and propose a novel two-dimensional Hadamard coding scheme. Using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be applied to improve the quality of the received signal. The detection correction rate is above 95% at indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.
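
The abstract does not spell out the two-dimensional Hadamard code, but such schemes are typically built on the standard Sylvester construction, whose mutually orthogonal ±1 rows let the receiver separate per-LED codes. A sketch of that building block (an assumption about the scheme's foundation, not the authors' exact code):

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2):
    each doubling step tiles [[H, H], [H, -H]]."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def is_orthogonal(H):
    """Check the defining property: distinct rows have zero dot product,
    and each row has squared norm n."""
    n = len(H)
    return all(sum(H[i][k] * H[j][k] for k in range(n)) == (n if i == j else 0)
               for i in range(n) for j in range(n))
```

Orthogonality is what makes the codes robust at the camera Rx: correlating the received frame against each row isolates one code even when the others overlap it.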

  12. Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas

    ERIC Educational Resources Information Center

    Hayden, Lance Alan

    2009-01-01

    This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…

  13. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce the cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on the sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from the available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns based on red, green, and blue color channels. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion and obtain the sampling arrangement. Two types of array patterns are designed: non-periodic and periodic. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
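
For context, the simplest baseline that the paper's Wiener reconstruction improves upon is bilinear demosaicking of a conventional RGGB Bayer array. A sketch of the green-channel step (illustrative only; not the paper's method):

```python
def bayer_pattern(rows, cols):
    """Which colour an RGGB Bayer filter samples at each pixel."""
    return [["RG"[c % 2] if r % 2 == 0 else "GB"[c % 2]
             for c in range(cols)] for r in range(rows)]

def demosaic_green(mosaic, pattern):
    """Fill in missing green samples by averaging the in-bounds green
    4-neighbours (simple bilinear demosaicking; the paper instead uses
    Wiener reconstruction driven by a human visual model)."""
    rows, cols = len(mosaic), len(mosaic[0])
    green = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if pattern[r][c] == "G":
                green[r][c] = mosaic[r][c]      # sampled directly
            else:
                nbrs = [mosaic[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c),
                                       (r, c - 1), (r, c + 1))
                        if 0 <= rr < rows and 0 <= cc < cols
                        and pattern[rr][cc] == "G"]
                green[r][c] = sum(nbrs) / len(nbrs)
    return green
```

The red and blue channels are reconstructed analogously; the paper's contribution is to optimise the sampling pattern itself against a perceptual error criterion rather than accept the fixed Bayer layout.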

  14. Analysis of Camera Arrays Applicable to the Internet of Things.

    PubMed

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Although different kinds of camera arrays have been used in various applications and analyzed in the research literature, there is little discussion comparing them. We therefore make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used either as a parallel or as a converged camera array, and take images and videos with it to identify the threshold.
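
For a parallel pair, the horizontal parallax of a point follows the pinhole relation d = fB/Z, which is why stereo performance degrades with shooting distance. A sketch with invented numbers (the paper's 7 m threshold concerns converged cameras and comes from the authors' own analysis, not from this formula):

```python
def disparity_px(focal_px, baseline_m, distance_m):
    """Horizontal parallax (pixels) of a point at distance_m for a
    rectified parallel stereo pair: d = f * B / Z (pinhole model)."""
    return focal_px * baseline_m / distance_m

def max_usable_distance(focal_px, baseline_m, min_disparity_px):
    """Largest shooting distance at which the pair still delivers at
    least min_disparity_px of parallax.  Function name and numbers are
    illustrative, not the paper's criterion."""
    return focal_px * baseline_m / min_disparity_px
```

For example, a 1000 px focal length with a 7 cm baseline yields 10 px of parallax at 7 m; beyond that, depth resolution falls off hyperbolically.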

  15. Reductions in injury crashes associated with red light camera enforcement in oxnard, california.

    PubMed

    Retting, Richard A; Kyrychenko, Sergey Y

    2002-11-01

    This study estimated the impact of red light camera enforcement on motor vehicle crashes in one of the first US communities to employ such cameras-Oxnard, California. Crash data were analyzed for Oxnard and for 3 comparison cities. Changes in crash frequencies were compared for Oxnard and control cities and for signalized and nonsignalized intersections by means of a generalized linear regression model. Overall, crashes at signalized intersections throughout Oxnard were reduced by 7% and injury crashes were reduced by 29%. Right-angle crashes, those most associated with red light violations, were reduced by 32%; right-angle crashes involving injuries were reduced by 68%. Because red light cameras can be a permanent component of the transportation infrastructure, crash reductions attributed to camera enforcement should be sustainable.

  16. Analysis of red light violation data collected from intersections equipped with red light photo enforcement cameras

    DOT National Transportation Integrated Search

    2006-03-01

    This report presents results from an analysis of about 47,000 red light violation records collected from 11 intersections in the : City of Sacramento, California, by red light photo enforcement cameras between May 1999 and June 2003. The goal of this...

  17. Red ball ranging optimization based on dual camera ranging method

    NASA Astrophysics Data System (ADS)

    Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung

    2018-05-01

    In this paper, the process by which the NAO robot positions and moves toward a target red ball through its camera system is analyzed and improved using a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested. Because the error of the current NAO robot does not depend on a single variable, the experiments were divided into two parts, forward ranging and backward ranging, to obtain more accurate single camera ranging data. Two USB cameras were then used in our experiments; the Hough circle method was adopted to identify the ball, and the HSV color space model was used to identify its red color. Our results showed that the dual camera ranging method reduced the variance of the ball-tracking error from 0.68 to 0.20.
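
The two ingredients named in the abstract, red detection in HSV space and two-camera ranging, can be sketched as follows. The thresholds and the rectified-pair triangulation are generic illustrations, not the authors' exact pipeline:

```python
def is_red_hsv(h, s, v):
    """Rough red test in HSV (OpenCV-style ranges: H 0-179, S and V 0-255).
    Red wraps around the hue origin, so two hue bands are needed.
    Threshold values are illustrative, not the paper's."""
    return (h <= 10 or h >= 170) and s >= 100 and v >= 80

def stereo_range(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a matched point from a rectified parallel stereo pair:
    Z = f * B / (xL - xR).  Generic triangulation; in the paper, Hough
    circle detection supplies the ball centres xL and xR."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must be in front of both cameras")
    return focal_px * baseline_m / disparity
```

Because the depth error of this triangulation grows with distance but does not depend on a monocular size prior, a dual-camera setup plausibly tightens the error variance relative to single camera ranging, as the authors measure.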

  18. Safety Evaluation of Red Light Running Camera Intersections in Illinois

    DOT National Transportation Integrated Search

    2017-04-01

    As a part of this research, the safety performance of red light running (RLR) camera systems was evaluated for a sample of 41 intersections and 60 RLR camera approaches located on state routes under IDOT's jurisdiction in the Chicago suburbs. Compr...

  19. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system capable of tracking a fixed-wing unmanned aerial vehicle (UAV) and providing its position and speed in real time during landing. The system comprises three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with a fixed-wing aircraft demonstrate that our infrared camera array system can guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system exceeds 1000 m. The experimental results also demonstrate that our system can be used for accurate automatic UAV landing in Global Positioning System (GPS)-denied environments.

  20. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array; the lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  1. The MVACS Robotic Arm Camera

    NASA Astrophysics Data System (ADS)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  2. Evaluating the Impacts of Red Light Camera Deployment on Intersection Traffic Safety

    DOT National Transportation Integrated Search

    2018-06-01

    Red-light cameras (RLC) are a popular countermeasure to reduce red-light running and improve intersection safety. Studies show that the reduction in side impact crashes at RLC intersections are often accompanied by no-change or an increase in the num...

  3. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 virtual Frisch-grid CdZnTe detectors, each 6×6×15 mm³. Since it is easier and more practical to grow small CdZnTe crystals than large monolithic ones, a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs, and 60Co radiation sources and located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those reported in previous studies using spatially separated arrays, because our camera measures interactions inside the CZT detector array, where the detector elements are positioned very close to one another. Lastly, the performance of our camera was compared with that of a camera based on a pixelated detector.

  4. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    DOE PAGES

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong; ...

    2016-02-15

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 virtual Frisch-grid CdZnTe detectors, each 6×6×15 mm³. Since it is easier and more practical to grow small CdZnTe crystals than large monolithic ones, a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs, and 60Co radiation sources and located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those reported in previous studies using spatially separated arrays, because our camera measures interactions inside the CZT detector array, where the detector elements are positioned very close to one another. Lastly, the performance of our camera was compared with that of a camera based on a pixelated detector.

  5. Estimation of color modification in digital images by CFA pattern change.

    PubMed

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
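
A crude version of the intermediate-value idea is to count pixels that equal the average of their neighbours, since demosaicked (interpolated) CFA positions tend to satisfy this exactly while directly sampled positions do not; a shift in the spatial pattern of such pixels then suggests colour modification. The one-dimensional sketch below illustrates only the principle, not the paper's advanced counting method:

```python
def intermediate_value_fraction(channel):
    """Fraction of interior pixels in a 2-D colour channel (list of rows)
    that equal the mean of their two horizontal neighbours.  Interpolated
    CFA positions tend to satisfy this exactly, so mapping where it holds
    exposes the CFA layout."""
    rows, cols = len(channel), len(channel[0])
    hits = total = 0
    for r in range(rows):
        for c in range(1, cols - 1):
            total += 1
            # integer comparison avoids float equality issues
            if 2 * channel[r][c] == channel[r][c - 1] + channel[r][c + 1]:
                hits += 1
    return hits / total
```

A channel column that was bilinearly interpolated scores near 1.0 on this statistic, while a directly sampled column scores much lower; comparing the per-column scores against the expected CFA layout is the fingerprint the paper's forensic method builds on.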

  6. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  7. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  8. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, D.P.

    1998-05-12

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.

  9. Lighting up a Dead Star's Layers

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This image from NASA's Spitzer Space Telescope shows the scattered remains of an exploded star named Cassiopeia A. Spitzer's infrared detectors 'picked' through these remains and found that much of the star's original layering had been preserved.

    In this false-color image, the faint, blue glow surrounding the dead star is material that was energized by a shock wave, called the forward shock, which was created when the star blew up. The forward shock is now located at the outer edge of the blue glow. Stars are also seen in blue. Green, yellow and red primarily represent material that was ejected in the explosion and heated by a slower shock wave, called the reverse shock wave.

    The picture was taken by Spitzer's infrared array camera and is a composite of 3.6-micron light (blue); 4.5-micron light (green); and 8.0-micron light (red).

  10. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
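    The 2D-spatial-by-2D-angular indexing described above can be illustrated by rearranging a raw plenoptic image into a 4D array, assuming an idealized square lenslet grid perfectly aligned with the sensor (real cameras need sub-pixel grid calibration); `raw_to_lightfield` is a hypothetical helper, not code from the paper.

    ```python
    import numpy as np

    def raw_to_lightfield(raw, lenslet_px):
        """Rearrange a plenoptic raw image into a 4D light field
        L[t, s, v, u]: (t, s) index the lenslet (spatial position),
        (v, u) index the pixel under each lenslet (angular / pupil
        coordinate). Idealized aligned-grid assumption."""
        H, W = raw.shape
        T, S = H // lenslet_px, W // lenslet_px
        lf = (raw[:T * lenslet_px, :S * lenslet_px]
              .reshape(T, lenslet_px, S, lenslet_px)
              .transpose(0, 2, 1, 3))           # -> (T, S, v, u)
        return lf

    # Fixing one angular coordinate gives a sub-aperture view of the
    # scene: view = lf[:, :, v, u]
    ```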

  11. Development of infrared scene projectors for testing fire-fighter cameras

    NASA Astrophysics Data System (ADS)

    Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.

    2008-04-01

    We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine having a resolution of 800 x 600 with aluminum-coated mirrors on a 17 micrometer pitch and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 Kelvin blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4 watt CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report on our quantitative results. Indirect projection has the advantage of more easily filling the wide field of view of the fire-fighter cameras, which typically is about 50 degrees. Direct projection more efficiently utilizes the available light, which will become important in emerging multispectral and hyperspectral applications.

  12. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device, also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols, based on a simple pattern, and analysis programs that investigate the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  13. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  14. Demosaicking algorithm for the Kodak-RGBW color filter array

    NASA Astrophysics Data System (ADS)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full color image. Each CFA pixel only captures one primary color component; the other primary components must be estimated using information from neighboring pixels. During demosaicking, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm using the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.
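    As a concrete illustration of the per-pixel estimation step described above, here is a minimal bilinear demosaic of an RGGB Bayer mosaic. It is a toy baseline only, not the least-squares luma-chroma method or the Kodak-RGBW algorithm of the paper; all names are our own.

    ```python
    import numpy as np

    def _conv_same(img, k):
        """3x3 same-size correlation with zero padding."""
        p = np.pad(img, 1)
        out = np.zeros(img.shape)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += k[dy + 1, dx + 1] * p[1 + dy:1 + dy + img.shape[0],
                                             1 + dx:1 + dx + img.shape[1]]
        return out

    def bilinear_demosaic(mosaic):
        """Bilinear demosaicking of an RGGB Bayer mosaic: each missing
        colour sample is a normalized average of its measured
        neighbours; measured samples are kept unchanged."""
        H, W = mosaic.shape
        out = np.empty((H, W, 3))
        masks = np.zeros((H, W, 3))
        masks[0::2, 0::2, 0] = 1        # R at (even row, even col)
        masks[0::2, 1::2, 1] = 1        # G at (even, odd) ...
        masks[1::2, 0::2, 1] = 1        # ... and (odd, even)
        masks[1::2, 1::2, 2] = 1        # B at (odd, odd)
        k = np.array([[0.25, 0.5, 0.25],
                      [0.5,  1.0, 0.5 ],
                      [0.25, 0.5, 0.25]])
        for c in range(3):
            num = _conv_same(mosaic * masks[..., c], k)
            den = _conv_same(masks[..., c], k)      # normalizer
            out[..., c] = np.where(masks[..., c] == 1, mosaic, num / den)
        return out
    ```

    Normalizing by the convolved mask (`num / den`) handles image borders and the different neighbor geometries of the three channels with one kernel.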

  15. VizieR Online Data Catalog: Galactic HII region IRAS 16148-5011 content (Mallick+, 2015)

    NASA Astrophysics Data System (ADS)

    Mallick, K. K.; Ojha, D. K.; Tamura, M.; Linz, H.; Samal, M. R.; Ghosh, S. K.

    2015-11-01

    NIR photometric observations in the J (1.25 µm), H (1.63 µm), and Ks (2.14 µm) bands (centred on RA=16:18:31, DE=-50:17:32 (2000)) were carried out on 2004 July 29 using the 1.4m Infrared Survey Facility (IRSF) telescope, South Africa. The observations were taken with the Simultaneous InfraRed Imager for Unbiased Survey (SIRIUS) instrument, a three-colour simultaneous camera mounted at the f/10 Cassegrain focus of the telescope. Radio continuum observations at 1280MHz were obtained on 2012 November 09 using the Giant Metrewave Radio Telescope (GMRT) array. The GMRT array consists of 30 antennas arranged in an approximate Y-shaped configuration, with each antenna having a diameter of 45m. This translates to a primary beam size of 26.2 arcmin at 1280MHz. (2 data files).

  16. Development of Radiated Power Diagnostics for NSTX-U

    NASA Astrophysics Data System (ADS)

    Reinke, Matthew; van Eden, G. G.; Lovell, Jack; Peterson, Byron; Gray, Travis; Chandra, Rian; Stratton, Brent; Ellis, Robert; NSTX-U Team

    2016-10-01

    New tools to measure radiated power in NSTX-U are under development to support a range of core and boundary physics research. Multiple resistive bolometer pinhole cameras are being built and calibrated to support FY17 operations, all utilizing standard Au-foil sensors from IPT-Albrecht. The radiation in the lower divertor will be measured using two 8-channel arrays viewing both vertically and radially to enable estimates of the 2D radiation structure. The core radiation will be measured using a 24-channel array viewing tangentially near the midplane, observing the full cross-section from the inner to the outer limiter. This enables characterization of the centrifugally driven in/out radiation asymmetry expected from mid-Z and high-Z impurities in highly rotating NSTX-U plasmas. All sensors utilize novel FPGA-based BOLO8BLF analyzers from D-tAcq Solutions. Resistive bolometer measurements are complemented by an InfraRed Video Bolometer (IRVB), which measures the temperature change of a radiation absorber using an IR camera. A prototype IRVB system viewing the lower divertor was installed on NSTX-U for FY16 operations. Initial results from plasma and benchtop testing are used to demonstrate the relative advantages of IRVB versus resistive bolometers. Supported in part by DE-AC05-00OR22725 & DE-AC02-09CH11466.

  17. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    NASA Astrophysics Data System (ADS)

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands at 12 m resolution: 0.52~0.60 µm for green, 0.63~0.69 µm for red and 0.76~0.89 µm for NIR. In the design of the IRIS camera, the three bands are acquired by three lines of CCDs (NIR, red and green). These CCDs are physically separated in the focal plane and their first pixels are not absolutely aligned. The micro-satellite platform is also not stable enough to allow co-registration of the 3 bands with a simple linear transformation. In the camera model developed, this platform instability was compensated with 3rd- to 4th-order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs. red and green vs. red CCDs, respectively). The cross-track alignments were 0.05 pixel and 5.9 pixel for the NIR vs. red and green vs. red CCDs, respectively. The focal length was found to be shorter by about 0.8%; this was attributed to the lower operating temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.
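    The polynomial attitude compensation mentioned above amounts to fitting a low-order polynomial to each sampled attitude angle over an acquisition and evaluating it at every scanline time. A minimal sketch under that reading; the actual XSAT camera model is not reproduced here.

    ```python
    import numpy as np

    def fit_attitude(t, angles, order=4):
        """Fit a low-order polynomial to one sampled attitude angle
        (roll, pitch or yaw) over an image acquisition, in the spirit
        of the 3rd/4th-order compensation described above. Returns a
        callable model that can be evaluated at scanline times."""
        coeffs = np.polyfit(t, angles, order)
        return np.poly1d(coeffs)
    ```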

  18. Developments in mercuric iodide gamma ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.E.; Beyerle, A.G.; Dolin, R.C.

    A mercuric iodide gamma-ray imaging array and camera system previously described has been characterized for spatial and energy resolution. Based on these data, a new camera is being developed to more fully exploit the potential of the array. Characterization results and design criteria for the new camera will be presented. 2 refs., 7 figs.

  19. Galaxies Gather at Great Distances

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [Figures removed for brevity, see original site: Distant Galaxy Cluster Infrared Survey poster; bird's-eye-view mosaics (with and without clusters marked); and close-ups of clusters at 9.1, 8.7, and 8.6 billion light-years.]

    Astronomers have discovered nearly 300 galaxy clusters and groups, including almost 100 located 8 to 10 billion light-years away, using the space-based Spitzer Space Telescope and the ground-based Mayall 4-meter telescope at Kitt Peak National Observatory in Tucson, Ariz. The new sample represents a six-fold increase in the number of known galaxy clusters and groups at such extreme distances, and will allow astronomers to systematically study massive galaxies two-thirds of the way back to the Big Bang.

    A mosaic portraying a bird's eye view of the field in which the distant clusters were found is shown at upper left. It spans a region of sky 40 times larger than that covered by the full moon as seen from Earth. Thousands of individual images from Spitzer's infrared array camera instrument were stitched together to create this mosaic. The distant clusters are marked with orange dots.

    Close-up images of three of the distant galaxy clusters are shown in the adjoining panels. The clusters appear as a concentration of red dots near the center of each image. These images reveal the galaxies as they were over 8 billion years ago, since that's how long their light took to reach Earth and Spitzer's infrared eyes.

    These pictures are false-color composites, combining ground-based optical images captured by the Mosaic-I camera on the Mayall 4-meter telescope at Kitt Peak, with infrared pictures taken by Spitzer's infrared array camera. Blue and green represent visible light at wavelengths of 0.4 microns and 0.8 microns, respectively, while red indicates infrared light at 4.5 microns.

    Kitt Peak National Observatory is part of the National Optical Astronomy Observatory in Tucson, Ariz.

  20. Safety evaluation of red-light cameras

    DOT National Transportation Integrated Search

    2005-04-01

    The objective of this final study was to determine the effectiveness of red-light-camera (RLC) systems in reducing crashes. The study used empirical Bayes before-and-after research using data from seven jurisdictions across the United States at 132 t...

  1. MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera

    NASA Astrophysics Data System (ADS)

    Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.

    2017-10-01

    An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when read out by measuring its electrical current. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here, based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning prohibit real-time operation. This is easily solved, and is economically feasible, using a GDD array, which enables us to acquire distance and magnitude information from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation, by measuring the time of flight (TOF).
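    For the FMCW ranging option mentioned at the end, the echo delay tau = 2d/c of a linear chirp of bandwidth B and duration T produces a beat frequency f_b = (B/T) * tau, so the distance recovers as d = c * f_b * T / (2 * B). A minimal sketch with illustrative numbers, not the authors' system parameters:

    ```python
    def fmcw_range(f_beat, sweep_bw, sweep_time, c=3.0e8):
        """Range from the beat frequency of a linear FMCW chirp:
        d = c * f_b * T / (2 * B). Units: Hz, Hz, s, m/s -> m."""
        return c * f_beat * sweep_time / (2.0 * sweep_bw)

    # e.g. for a 1 GHz sweep over 1 ms, a beat of ~66.7 kHz
    # corresponds to a target at ~10 m.
    ```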

  2. NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array

    NASA Astrophysics Data System (ADS)

    Glicenstein, J.-F.; Shayduk, M.

    2017-01-01

    NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which covers the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, high voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results of a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.

  3. Spillover Effect and Economic Effect of Red Light Cameras

    DOT National Transportation Integrated Search

    2017-04-01

    "Spillover effect" of red light cameras (RLCs) refers to the expected safety improvement at intersections other than those actually treated. Such effects may be due to jurisdiction-wide publicity of RLCs and the general publics lack of knowledge o...

  4. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  5. Science-Filters Study of Martian Rock Sees Hematite

    NASA Image and Video Library

    2017-11-01

    This false-color image demonstrates how use of special filters available on the Mast Camera (Mastcam) of NASA's Curiosity Mars rover can reveal the presence of certain minerals in target rocks. It is a composite of images taken through three "science" filters chosen for making hematite, an iron-oxide mineral, stand out as exaggerated purple. This target rock, called "Christmas Cove," lies in an area on Mars' "Vera Rubin Ridge" where Mastcam reconnaissance imaging (see PIA22065) with science filters suggested a patchy distribution of exposed hematite. Bright lines within the rocks are fractures filled with calcium sulfate minerals. Christmas Cove did not appear to contain much hematite until the rover team conducted an experiment on this target: Curiosity's wire-bristled brush, the Dust Removal Tool, scrubbed the rock, and a close-up with the Mars Hand Lens Imager (MAHLI) confirmed the brushing. The brushed area is about 2.5 inches (6 centimeters) across. The next day -- Sept. 17, 2017, on the mission's Sol 1819 -- this observation with Mastcam and others with the Chemistry and Camera (ChemCam) showed a strong hematite presence that had been subdued beneath the dust. The team is continuing to explore whether the patchiness in the reconnaissance imaging may result more from variations in the amount of dust cover rather than from variations in hematite content. Curiosity's Mastcam combines two cameras: one with a telephoto lens and the other with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals.
Each of the filters used for this image admits light from a narrow band of wavelengths, extending to only about 5 nanometers longer or shorter than the filter's central wavelength. Three observations are combined for this image, each through one of the filters centered at 751 nanometers (in the near-infrared part of the spectrum just beyond red light), 527 nanometers (green) and 445 nanometers (blue). Usual color photographs from digital cameras -- such as a Mastcam one of this same place (see PIA22067) -- also combine information from red, green and blue filtering, but the filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. Mastcam's narrow-band filters used for this view help to increase spectral contrast, making blues bluer and reds redder, particularly with the processing used to boost contrast in each of the component images of this composite. Fine-grained hematite preferentially absorbs sunlight in the green portion of the spectrum, around 527 nanometers. That gives it the purple look from a combination of red and blue light reflected by the hematite and reaching the camera through the other two filters. https://photojournal.jpl.nasa.gov/catalog/PIA22066
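The per-band contrast boost described above can be sketched as an independent percentile stretch of each narrow-band image before stacking them as R, G, B; a generic illustration of the technique, not JPL's actual processing pipeline, with all names our own.

```python
import numpy as np

def stretch(band, low=2, high=98):
    """Percentile contrast stretch of one filter image to [0, 1]."""
    lo, hi = np.percentile(band, [low, high])
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

def false_color(nir751, green527, blue445):
    """Assemble a false-color composite: three narrow-band images
    mapped to R, G, B, each stretched independently to exaggerate
    spectral contrast. A target that absorbs the green band then
    appears purple (red + blue)."""
    return np.dstack([stretch(b) for b in (nir751, green527, blue445)])
```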

  6. Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.

    DOT National Transportation Integrated Search

    2009-12-01

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...

  7. Double biprism arrays design using for stereo-photography of mobile phone camera

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Chu, Pu-Yi; Chao, Yu-Hao; Pan, Jui-Wen; Tien, Chuen-Lin

    2016-11-01

    Generally, a mobile phone uses one camera to capture images, so it is hard to obtain a stereo image pair. Adding a biprism array makes it easy to capture the image pair, so users can take stereo images anywhere with their mobile phone by attaching a biprism array, and simply remove it to take a normal image. However, using biprism arrays induces chromatic aberration. Therefore, we design double biprism arrays to reduce the chromatic aberration.

  8. "Wow, It Turned out Red! First, a Little Yellow, and Then Red!" 1st-Graders' Work with an Infrared Camera

    ERIC Educational Resources Information Center

    Jeppsson, Fredrik; Frejd, Johanna; Lundmark, Frida

    2017-01-01

    This study focuses on investigating how students make use of their bodily experiences in combination with infrared (IR) cameras, as a way to make meaning in learning about heat, temperature, and friction. A class of 20 primary students (age 7-8 years), divided into three groups, took part in three IR camera laboratory experiments. The qualitative…

  9. Leveraging traffic and surveillance video cameras for urban traffic.

    DOT National Transportation Integrated Search

    2014-12-01

    The objective of this project was to investigate the use of existing video resources, such as traffic cameras, police cameras, red light cameras, and security cameras, for the long-term, real-time collection of traffic statistics. An additional ob...

  10. Fabrication of multi-focal microlens array on curved surface for wide-angle camera module

    NASA Astrophysics Data System (ADS)

    Pan, Jun-Gu; Su, Guo-Dung J.

    2017-08-01

    In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, but our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. In order to make the critical microlens array, we used inkjet printing to control the surface shape of each microlens to achieve different focal lengths, and used a replication method to form the curved hexagonal microlens array.

  11. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    DOT National Transportation Integrated Search

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  12. Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.

    PubMed

    Torregrosa, A J; Maestre, H; Capmany, J

    2015-11-15

    We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (eye-safe region) illuminated by an amplified spontaneous emission (ASE) fiber source with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd³⁺:GdVO₄ laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of standard silicon focal plane array bi-dimensional sensors, commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. The use of ASE illumination benefits from a noticeable increase in the field of view (FOV) that can be upconverted with regard to using coherent laser illumination. The upconverted power allows us to capture real-time video in a standard nonintensified CCD camera.

  13. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets, acquired at different target distances, dates, and locations, are also used to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing efficiency.
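    At the core of such band-to-band registration is mapping pixel coordinates of one band into another band's geometry through a 3x3 projective transform (homography); a minimal sketch of that core operation, omitting RABBIT's robust and adaptive corrections, with the function name our own.

    ```python
    import numpy as np

    def apply_homography(H, pts):
        """Map (N, 2) band-image coordinates through a 3x3 projective
        transform: homogenize, multiply, then divide by the third
        coordinate. This is the geometric heart of homography-based
        band co-registration."""
        pts = np.asarray(pts, float)
        ones = np.ones((len(pts), 1))
        ph = np.hstack([pts, ones]) @ np.asarray(H, float).T
        return ph[:, :2] / ph[:, 2:3]
    ```

    In practice the homography itself is estimated from matched features between bands (e.g. with a least-squares or RANSAC fit) before being applied this way.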

  14. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  15. Single lens 3D-camera with extended depth-of-field

    NASA Astrophysics Data System (ADS)

    Perwaß, Christian; Wietzke, Lennart

    2012-03-01

    Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

  16. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  17. Performance of Backshort-Under-Grid Kilopixel TES Arrays for HAWC+

    NASA Technical Reports Server (NTRS)

    Staguhn, J. G.; Benford, D. J.; Dowell, C. D.; Fixsen, D. J.; Hilton, G. C.; Irwin, K. D.; Jhabvala, C. A.; Maher, S. F.; Miller, T. M.; Moseley, S. H.

    2016-01-01

    We present results from laboratory detector characterizations of the first kilopixel BUG arrays for the High-resolution Wideband Camera Plus (HAWC+), the imaging far-infrared polarimeter camera for the Stratospheric Observatory for Infrared Astronomy (SOFIA). Our tests demonstrate that the array performance is consistent with the predicted properties. Here, we highlight results obtained for the thermal conductivity, noise performance, and detector speed, along with first optical results demonstrating the pixel yield of the arrays.

  18. An electrically tunable plenoptic camera using a liquid crystal microlens array.

    PubMed

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, which restricts their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is presented experimentally. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring raw radiation from targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.

  19. An electrically tunable plenoptic camera using a liquid crystal microlens array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Yu; School of Automation, Huazhong University of Science and Technology, Wuhan 430074; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074

    2015-05-15

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, which restricts their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is presented experimentally. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring raw radiation from targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.

  20. An electrically tunable plenoptic camera using a liquid crystal microlens array

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, which restricts their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is presented experimentally. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring raw radiation from targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.

  1. Trained neurons-based motion detection in optical camera communications

    NASA Astrophysics Data System (ADS)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

    A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons in a neural network that perform repetitive analysis to provide efficient and reliable motion detection in OCC. This motion detection can be considered a third functionality of OCC, in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. Motion is detected by observing the user's finger movement, represented as a centroid, through the OCC link via the camera. Unlike conventional trained-neuron approaches, the proposed TNMD is trained not with the motion itself but with centroid data samples, providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at transmission distances of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. Combined with the proposed TNMD, the OCC system can provide illumination, communication, and motion detection in a convenient smart home environment.
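The centroid-based detection described above can be illustrated with a toy sketch: extract the bright blob's centroid in each frame, then classify the track by its net displacement. The threshold, frame sizes, and the simple rule-based classifier below are assumptions standing in for the paper's trained neural network.

```python
import numpy as np

def centroid(frame, thresh=0.5):
    """Centroid (x, y) of pixels above threshold (e.g. the user's fingertip)."""
    mask = frame > thresh
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def classify_motion(centroids):
    """Label a centroid track as left/right/up/down from its net displacement.
    A rule-based stand-in for the trained-neuron classifier in the abstract."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Synthetic track: a bright spot moving 3 pixels to the right
frames = []
for x in (2, 3, 4, 5):
    f = np.zeros((8, 8))
    f[4, x] = 1.0
    frames.append(f)
track = [centroid(f) for f in frames]
print(classify_motion(track))   # → right
```

Training on centroid samples rather than raw motion video, as the paper does, keeps the classifier's input dimensionality tiny, which is why the detection algorithm can be far less complex.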

  2. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase, the ASTRI project foresees the installation of the first elements of the array at the CTA southern site: a mini-array of seven telescopes. The ASTRI Camera DAQ Software handles camera data acquisition, storage and display during camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype, which will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the camera data packets in near real time. We are developing the DAQ software following an iterative and incremental model in order to maximize software reuse and to implement a system that is easily adaptable to change. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.
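The one-way socket acquisition described above can be sketched as a small receive loop. The fixed packet size, host/port, and raw-file layout below are hypothetical, since the actual Back End Electronics packet format is not given in the abstract.

```python
import socket

PACKET_SIZE = 1024   # hypothetical fixed packet length; the real BEE format differs

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket (TCP delivers arbitrary chunks)."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("camera closed the connection")
        buf.extend(chunk)
    return bytes(buf)

def acquire(host, port, n_packets, raw_path):
    """Minimal one-way acquisition loop: connect to the Back End Electronics,
    read n_packets fixed-size packets, and append them to a raw file."""
    with socket.create_connection((host, port)) as sock, open(raw_path, "wb") as raw:
        for _ in range(n_packets):
            raw.write(recv_exact(sock, PACKET_SIZE))
```

In the real system this loop would feed both the raw/FITS storage path and the Quick Look display; the `recv_exact` helper matters because a single `recv` call is not guaranteed to return a whole packet.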

  3. Concept design of an 80-dual polarization element cryogenic phased array camera for the Arecibo Radio Telescope

    NASA Astrophysics Data System (ADS)

    Cortes-Medellin, German; Parshley, Stephen; Campbell, Donald B.; Warnick, Karl F.; Jeffs, Brian D.; Ganesh, Rajagopalan

    2016-08-01

    This paper presents the current concept design for ALPACA (Advanced L-Band Phased Array Camera for Arecibo), an L-band cryo-phased array instrument proposed for the 305 m radio telescope at Arecibo. It includes a cryogenically cooled front end with 160 low-noise amplifiers, an RF-over-fiber signal transport, and a digital beamformer with an instantaneous bandwidth of 312.5 MHz per channel. The camera will digitally form 40 simultaneous beams inside the available field of view of the Arecibo telescope optics, with an expected system temperature goal of 30 K.
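Digital beamforming of the kind described, where 40 simultaneous beams are formed from 160 element signals, amounts to multiplying the element voltages by a matrix of complex steering weights. A toy NumPy sketch follows; the 1-D array geometry and the weight values are illustrative, not ALPACA's.

```python
import numpy as np

def form_beams(element_signals, steering_weights):
    """Digital beamforming: each output beam is a complex-weighted sum of the
    array-element voltages. element_signals: (n_elements, n_samples) complex;
    steering_weights: (n_beams, n_elements) complex."""
    return steering_weights @ element_signals

# Toy example: an 8-element array, two beams, one plane wave from broadside
n_el, n_samp = 8, 64
signal = np.exp(2j * np.pi * 0.1 * np.arange(n_samp))   # tone at the elements
elements = np.tile(signal, (n_el, 1))                   # broadside: zero phase gradient
w_broadside = np.ones((1, n_el)) / n_el                 # beam steered to broadside
w_offaxis = (np.exp(2j * np.pi * 0.25 * np.arange(n_el)) / n_el)[None, :]
beams = form_beams(elements, np.vstack([w_broadside, w_offaxis]))
print(np.abs(beams[0]).mean() > np.abs(beams[1]).mean())  # → True
```

The broadside beam sums the element signals coherently, while the off-axis weights cancel them, which is exactly how a beamformer selects one direction out of the field of view; a real system would apply such a matrix per frequency channel across the 312.5 MHz band.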

  4. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be run prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix R29 camera. The optimized models improve the image resolution, imaging screen utilization, and depth-of-field shooting range.

  5. Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera.

    PubMed

    Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico

    2014-06-16

    We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.
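The background-subtracted reflectance measurement described above can be sketched as a simple per-wavelength ratio: subtract the laser-off (thermal background) frame from the laser-on frame, then normalize by the same quantity measured on a diffuse reference surface. The frame shapes and reference normalization below are assumptions consistent with, but not taken from, the paper.

```python
import numpy as np

def reflectance_cube(smp_on, smp_off, ref_on, ref_off, eps=1e-12):
    """Per-wavelength reflectance estimate, mimicking the DFPA's on-chip
    background subtraction: the passive thermal background (laser off) is
    subtracted from each laser-on frame, then normalized by the same
    difference for a diffuse reference surface.
    All inputs: (n_wavelengths, H, W) arrays."""
    return (smp_on - smp_off) / np.maximum(ref_on - ref_off, eps)

# Toy check: a sample reflecting 50% of the reference at all 15 wavelengths
ref_on  = np.full((15, 4, 4), 200.0)
ref_off = np.full((15, 4, 4), 100.0)
smp_on  = np.full((15, 4, 4), 150.0)
smp_off = np.full((15, 4, 4), 100.0)
R = reflectance_cube(smp_on, smp_off, ref_on, ref_off)
print(float(R.mean()))   # → 0.5
```

The 15 planes here echo the 15 individually addressable QCLs; stacking one background-subtracted frame per laser wavelength yields the hyperspectral cube from which spectral reflectance is read off per pixel.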

  6. Initial astronomical results with a new 5-14 micron Si:Ga 58x62 DRO array camera

    NASA Technical Reports Server (NTRS)

    Gezari, Dan; Folz, Walter; Woods, Larry

    1989-01-01

    A new array camera system was developed using a 58 x 62 pixel Si:Ga (gallium-doped silicon) DRO (direct readout) photoconductor array detector manufactured by Hughes/Santa Barbara Research Center (SBRC). The camera system is a broad-band photometer designed for 5 to 14 micron imaging with large ground-based optical telescopes. In a typical application a 10 micron photon flux of about 10^9 photons s^-1 m^-2 micron^-1 arcsec^-2 is incident in the telescope focal plane, while the detector well capacity of these arrays is 10^5 to 10^6 electrons. However, when the real efficiencies and operating conditions are accounted for, the 2-channel 3596-pixel array operates with about half-full wells at 10 microns and 10% bandwidth, with a high duty cycle and no real experimental compromises.
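The flux and well-capacity figures quoted above imply millisecond-scale frame times, which is why direct-readout mid-IR arrays must be read very fast. A back-of-the-envelope check follows; the telescope area, pixel scale, and quantum efficiency are assumed values for illustration, not from the paper.

```python
# Order-of-magnitude well-fill estimate from the numbers in the abstract.
flux = 1e9          # photons s^-1 m^-2 micron^-1 arcsec^-2 at 10 um (abstract)
area = 7.0          # m^2 collecting area (~3 m telescope, assumed)
bandwidth = 1.0     # microns, i.e. ~10% bandwidth at 10 um (abstract)
pixel_solid = 0.25  # arcsec^2 per pixel (0.5" pixels, assumed)
qe = 0.3            # detective quantum efficiency (assumed)

e_per_s = flux * area * bandwidth * pixel_solid * qe   # electrons/pixel/second
well = 1e6                                             # electron well depth (abstract)
t_half_fill = 0.5 * well / e_per_s                     # time to half-fill a well
print(f"{e_per_s:.2e} e-/s, half-well in {t_half_fill*1e3:.3f} ms")
```

Under these assumptions each pixel collects roughly 5 x 10^8 electrons per second, so a well half-fills in about a millisecond, consistent with the paper's statement that the array runs at about half-full wells with a high duty cycle.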

  7. Visualization of Porphyrin-Based Photosensitizer Distribution from Fluorescence Images In Vivo Using an Optimized RGB Camera

    NASA Astrophysics Data System (ADS)

    Liu, L.; Huang, Zh.; Qiu, Zh.; Li, B.

    2018-01-01

    A handheld RGB camera was developed to monitor the in vivo distribution of the porphyrin-based photosensitizer (PS) hematoporphyrin monomethyl ether (HMME) in blood vessels during photodynamic therapy (PDT). The focal length, f-number, International Organization for Standardization (ISO) sensitivity, and shutter speed of the camera were optimized for solution samples with various HMME concentrations. After parameter optimization, the red intensity value of the fluorescence image was found to be linearly related to the fluorescence intensity under the investigated conditions. The RGB camera was then used to monitor the in vivo distribution of HMME in blood vessels in a skin-fold window chamber model. The red intensity value of the recorded RGB fluorescence image was found to be linearly correlated with HMME concentration in the range 0-24 μM. Significant differences in the red-to-green intensity ratios were observed between the blood vessels and the surrounding tissue.
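The linear calibration described above amounts to fitting a line to red-channel intensity versus concentration and inverting it to read concentration off an image. A minimal sketch with synthetic data follows; the calibration numbers are illustrative, not the paper's.

```python
import numpy as np

def fit_red_vs_concentration(concentrations, red_means):
    """Least-squares line relating mean red-channel intensity to HMME
    concentration, as in the reported linear calibration (0-24 uM range).
    Returns (slope, intercept)."""
    slope, intercept = np.polyfit(concentrations, red_means, 1)
    return slope, intercept

def estimate_concentration(red_value, slope, intercept):
    """Invert the calibration to estimate concentration from a red intensity."""
    return (red_value - intercept) / slope

# Synthetic calibration data on a perfectly linear detector
conc = np.array([0.0, 6.0, 12.0, 18.0, 24.0])   # uM
red = 5.0 * conc + 10.0                          # mean red intensity
m, b = fit_red_vs_concentration(conc, red)
print(round(estimate_concentration(70.0, m, b), 1))   # → 12.0
```

In practice the red channel saturates and loses linearity outside a limited range, which is why the camera's ISO, f-number, and shutter speed had to be optimized before a linear fit was valid.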

  8. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements with a single high-speed camera without sacrificing spatial resolution. Two real experiments, shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
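The color crosstalk correction step can be sketched as inverting a 2 × 2 channel-mixing matrix: each measured channel is a mixture of the two true optical paths, so multiplying by the inverse of the calibrated mixing matrix separates them. The leakage values below are illustrative assumptions, not the paper's calibrated matrix.

```python
import numpy as np

def correct_crosstalk(blue_meas, red_meas, M):
    """Undo color crosstalk between the blue and red channels by inverting the
    2x2 mixing matrix M, where M[i, j] is the leakage of true channel j into
    measured channel i. M must come from a calibration step."""
    Minv = np.linalg.inv(M)
    stacked = np.stack([blue_meas, red_meas]).reshape(2, -1)
    blue, red = (Minv @ stacked).reshape(2, *blue_meas.shape)
    return blue, red

# Assumed 10% symmetric leakage between the two optical paths
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
true_b = np.full((4, 4), 100.0)
true_r = np.full((4, 4), 40.0)
meas = (M @ np.stack([true_b, true_r]).reshape(2, -1)).reshape(2, 4, 4)
b, r = correct_crosstalk(meas[0], meas[1], M)
print(round(float(b.mean()), 6), round(float(r.mean()), 6))   # → 100.0 40.0
```

Once separated, the two sub-images play the roles of the left and right views of a conventional stereo-DIC pair, which is why no sensor resolution is sacrificed.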

  9. An AzTEC 1.1-mm survey for ULIRGs in the field of the Galaxy Cluster MS0451.6-0305

    NASA Astrophysics Data System (ADS)

    Wardlow, J. L.; Smail, Ian; Wilson, G. W.; Yun, M. S.; Coppin, K. E. K.; Cybulski, R.; Geach, J. E.; Ivison, R. J.; Aretxaga, I.; Austermann, J. E.; Edge, A. C.; Fazio, G. G.; Huang, J.; Hughes, D. H.; Kodama, T.; Kang, Y.; Kim, S.; Mauskopf, P. D.; Perera, T. A.; Scott, K. S.

    2010-02-01

    We have undertaken a deep (σ ~ 1.1 mJy) 1.1-mm survey of the z = 0.54 cluster MS0451.6-0305 using the AzTEC camera on the James Clerk Maxwell Telescope. We detect 36 sources with signal-to-noise ratio (S/N) ≥ 3.5 in the central 0.10 deg^2 and present the AzTEC map, catalogue and number counts. We identify counterparts to 18 sources (50 per cent) using radio, mid-infrared, Spitzer Infrared Array Camera (IRAC) and Submillimetre Array data. Optical, near- and mid-infrared spectral energy distributions are compiled for the 14 of these galaxies with detectable counterparts, which are expected to contain all likely cluster members. We then use photometric redshifts and colour selection to separate background galaxies from potential cluster members and test the reliability of this technique using archival observations of submillimetre galaxies. We find two potential MS0451-03 members, which, if they are both cluster galaxies, have a total star formation rate (SFR) of ~100 M⊙ yr^-1 - a significant fraction of the combined SFR of all the other galaxies in MS0451-03. We also examine the stacked rest-frame mid-infrared, millimetre and radio emission of cluster members below our AzTEC detection limit, and find that the SFRs of mid-IR-selected galaxies in the cluster and redshift-matched field populations are comparable. In contrast, the average SFR of the morphologically classified late-type cluster population is nearly three times less than that of the corresponding redshift-matched field galaxies. This suggests that these galaxies may be in the process of being transformed onto the red sequence by the cluster environment. Our survey demonstrates that although the environment of MS0451-03 appears to suppress star formation in late-type galaxies, it can support active, dust-obscured mid-IR galaxies and potentially millimetre-detected LIRGs.

  10. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  11. Gamma ray camera

    DOEpatents

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a scintillator crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, intermediate layer and n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  12. Gamma ray camera

    DOEpatents

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a scintillator crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, intermediate layer and n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  13. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256 × 256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extragalactic and Galactic fields; a large effort has been dedicated to exploring the possibility of achieving precise photometric measurements in the J, H, and K astronomical bands, with very promising results.

  14. The Detection and Photometric Redshift Determination of Distant Galaxies using SIRTF's Infrared Array Camera

    NASA Technical Reports Server (NTRS)

    Simpson, C.; Eisenhardt, P.

    1998-01-01

    We investigate the ability of the Space Infrared Telescope Facility's Infrared Array Camera to detect distant (z > 3) galaxies and measure their photometric redshifts. Our analysis shows that changing the original long-wavelength filter specifications provides significant improvements in performance in this and other areas.

  15. The upgrade of the H.E.S.S. cameras

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; Naurois, Mathieu de; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-12-01

    The High Energy Stereoscopic System (HESS) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas highland in Namibia. It was built to detect Very High Energy (VHE, >100 GeV) cosmic gamma rays. Since 2003, HESS has discovered the majority of the known astrophysical VHE gamma-ray sources, opening a new observational window on the extreme non-thermal processes at work in our universe. HESS consists of four 12-m diameter Cherenkov telescopes (CT1-4), which started data taking in 2002, and a larger 28-m telescope (CT5), built in 2012, which lowers the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems, and the control and data acquisition software. Only the PMTs and their HV supplies have been kept from the original cameras. Novel technical solutions have been introduced, which will find their way into some of the Cherenkov cameras foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory. In particular, the camera readout system is the first large-scale system based on the analog memory chip NECTAr, which was designed for CTA cameras. The camera control subsystems and the control software framework also pursue an innovative design, exploiting cutting-edge hardware and software solutions that excel in performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 were upgraded in fall 2016. Together they will assure continuous operation of HESS at its full sensitivity until and possibly beyond the advent of CTA.
This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded HESS camera.

  16. Experience with the UKIRT InSb array camera

    NASA Technical Reports Server (NTRS)

    Mclean, Ian S.; Casali, Mark M.; Wright, Gillian S.; Aspin, Colin

    1989-01-01

    The cryogenic infrared camera, IRCAM, has been operating routinely on the 3.8 m UK Infrared Telescope (UKIRT) on Mauna Kea, Hawaii for over two years. The camera, which uses a 62 x 58 element indium antimonide array from Santa Barbara Research Center, was designed and built at the Royal Observatory, Edinburgh, which operates UKIRT on behalf of the UK Science and Engineering Research Council. Over the past two years at least 60% of the available time on UKIRT has been allocated to IRCAM observations. Described here are some of the properties of this instrument and its detector that influence astronomical performance. Observational techniques and the power of IR arrays are discussed, together with some recent astronomical results.

  17. Spitzer Finds Clarity in the Inner Milky Way

    NASA Technical Reports Server (NTRS)

    2008-01-01

    More than 800,000 frames from NASA's Spitzer Space Telescope were stitched together to create this infrared portrait of dust and stars radiating in the inner Milky Way.

    As inhabitants of a flat galactic disk, Earth and its solar system have an edge-on view of their host galaxy, like looking at a glass dish from its edge. From our perspective, most of the galaxy is condensed into a blurry narrow band of light that stretches completely around the sky, also known as the galactic plane.

    In this mosaic the galactic plane is broken up into five components: the far-left side of the plane (top image); the area just left of the galactic center (second to top); galactic center (middle); the area to the right of galactic center (second to bottom); and the far-right side of the plane (bottom). From Earth, the top two panels are visible to the northern hemisphere, and the bottom two images to the southern hemisphere. Together, these panels represent more than 50 percent of our entire Milky Way galaxy.

    The swaths of green represent organic molecules, called polycyclic aromatic hydrocarbons, which are illuminated by light from nearby star formation, while the thermal emission, or heat, from warm dust is rendered in red. Star-forming regions appear as swirls of red and yellow, where the warm dust overlaps with the glowing organic molecules. The blue specks sprinkled throughout the photograph are Milky Way stars. The bluish-white haze that hovers heavily in the middle panel is starlight from the older stellar population towards the center of the galaxy.

    This is a three-color composite that shows infrared observations from two Spitzer instruments. Blue represents 3.6-micron light and green shows light of 8 microns, both captured by Spitzer's infrared array camera. Red is 24-micron light detected by Spitzer's multiband imaging photometer.

    The Galactic Legacy Infrared Mid-Plane Survey Extraordinaire team (GLIMPSE) used the telescope's infrared array camera to see light from newborn stars, old stars and polycyclic aromatic hydrocarbons. A second group, the Multiband Imaging Photometer for Spitzer Galactic Plane Survey team (MIPSGAL), imaged dust in the inner galaxy with Spitzer's multiband imaging photometer.

  18. A Major Upgrade of the H.E.S.S. Cherenkov Cameras

    NASA Astrophysics Data System (ADS)

    Lypova, Iryna; Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-03-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in Namibia. It was built to detect Very High Energy (VHE, >100 GeV) cosmic gamma rays, and consists of four 12 m diameter Cherenkov telescopes (CT1-4), built in 2003, and a larger 28 m telescope (CT5), built in 2012. The larger mirror surface of CT5 permits lowering the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems, and the control and data acquisition software. Technical solutions foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory have been introduced, most notably a readout based on the NECTAr analog memory chip. The camera control subsystems and the control software framework also pursue an innovative design, increasing the camera performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 will be upgraded in fall 2016. Together they will assure continuous operation of H.E.S.S. at its full sensitivity until and possibly beyond the advent of CTA. This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded H.E.S.S. camera.

  19. An astronomy camera for low background applications in the 1.0 to 2.5 μm spectral region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaki, S.A.; Bailey, G.C.; Hagood, R.W.

    1989-02-01

    A short wavelength (1.0-2.5 μm) 128 x 128 focal plane array forms the heart of this low background astronomy camera system. The camera is designed to accept either a 128 x 128 HgCdTe array for the 1-2.5 μm spectral region or an InSb array for the 3-5 μm spectral region. A cryogenic folded optical system controls excess stray light, along with a cold eight-position filter wheel for spectral filtering. The camera head and electronics will also accept a 256 x 256 focal plane. Engineering evaluation of the complete system has been completed, along with two engineering runs at the JPL Table Mountain Observatory. System design, engineering performance, and sample imagery are presented in this paper.

  20. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For digital refocusing (one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  1. Multi-pulse shadowgraphic RGB illumination and detection for flow tracking

    NASA Astrophysics Data System (ADS)

    Menser, Jan; Schneider, Florian; Dreier, Thomas; Kaiser, Sebastian A.

    2018-06-01

    This work demonstrates the application of a multi-color LED and a consumer color camera for visualizing phase boundaries in two-phase flows, in particular for particle tracking velocimetry. The LED emits a sequence of short light pulses, red, green, then blue (RGB), and through its color-filter array, the camera captures all three pulses on a single RGB frame. In a backlit configuration, liquid droplets appear as shadows in each color channel. Color reversal and color cross-talk correction yield a series of three frozen-flow images that can be used for further analysis, e.g., determining the droplet velocity by particle tracking. Three example flows are presented, solid particles suspended in water, the penetrating front of a gasoline direct-injection spray, and the liquid break-up region of an "air-assisted" nozzle. Because of the shadowgraphic arrangement, long path lengths through scattering media lower image contrast, while visualization of phase boundaries with high resolution is a strength of this method. Apart from a pulse-and-delay generator, the overall system cost is very low.
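    The channel-separation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name and the simple per-channel normalization are assumptions, and the measured cross-talk correction applied in the paper is omitted.

    ```python
    import numpy as np

    def split_rgb_pulses(rgb_frame):
        """Split one color-camera frame into three time-ordered images.

        rgb_frame: (H, W, 3) array whose channels 0/1/2 captured the red,
        green, and blue LED pulses, i.e. three successive instants.
        Each channel is normalized and inverted ("color reversal") so that
        droplet shadows become bright blobs suitable for particle tracking.
        A real implementation would also unmix channel cross-talk with a
        measured 3x3 matrix; that calibration step is omitted here.
        """
        frames = []
        for c in range(3):
            ch = rgb_frame[..., c].astype(float)
            peak = ch.max()
            if peak > 0:
                ch = ch / peak
            frames.append(1.0 - ch)  # invert: dark shadow -> bright signal
        return frames
    ```

    The three returned arrays then feed a standard particle-tracking step, since each one freezes the flow at a different pulse time.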

  2. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE), and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed, and the imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
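    The hybrid demosaicing strategy described here, edge-directed interpolation where a strong gradient is detected and bilinear averaging elsewhere, can be illustrated for the green channel of a Bayer mosaic. This is a generic sketch of the technique, not the paper's DSP implementation; the gradient threshold and the green-only scope are assumptions.

    ```python
    import numpy as np

    def interpolate_green(raw, green_mask):
        """Fill in missing green samples of a Bayer mosaic.

        raw: 2-D array of raw sensor values.
        green_mask: boolean array, True where the pixel already samples green.
        At each missing-green pixel the horizontal and vertical gradients of
        the green neighbours are compared: a dominant gradient marks an edge,
        and interpolation follows the edge (the low-gradient direction);
        otherwise plain bilinear averaging of the four neighbours is used.
        """
        g = np.where(green_mask, raw, 0.0).astype(float)
        out = g.copy()
        h, w = raw.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if green_mask[y, x]:
                    continue
                left, right = g[y, x - 1], g[y, x + 1]
                up, down = g[y - 1, x], g[y + 1, x]
                dh, dv = abs(left - right), abs(up - down)
                if dh > 2 * dv:        # strong horizontal gradient: vertical edge
                    out[y, x] = (up + down) / 2.0
                elif dv > 2 * dh:      # strong vertical gradient: horizontal edge
                    out[y, x] = (left + right) / 2.0
                else:                  # non-edge pixel: bilinear average
                    out[y, x] = (left + right + up + down) / 4.0
        return out
    ```

    The same gradient test generalizes to the red and blue planes, which is what keeps edges from smearing while flat regions get the cheap bilinear path.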

  3. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves an NEΔT of 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad the vertical and horizontal minimum resolvable temperature (MRT) are approximately 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.

  4. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
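    The abstract does not state the estimation method, but one standard way to recover a fixed rotation between two sensor frames from matched direction vectors (e.g. camera-derived pointing directions versus INS-reported ones) is the SVD-based Kabsch solution. The sketch below is a generic illustration under that assumption, not NASA's algorithm:

    ```python
    import numpy as np

    def align_frames(v_cam, v_ins):
        """Least-squares rotation R such that R @ v_cam[i] ~= v_ins[i].

        v_cam, v_ins: (N, 3) arrays of matched unit direction vectors
        expressed in the camera frame and the INS frame respectively.
        Uses the Kabsch/SVD construction, with a determinant correction
        to guarantee a proper rotation (no reflection).
        """
        H = np.asarray(v_cam, float).T @ np.asarray(v_ins, float)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    ```

    With three or more well-spread correspondences accumulated while the camera fixates during an orbit, this recovers the camera-to-INS alignment with no ground tie points, matching the spirit of the approach described above.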

  5. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the testing and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques, and communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field-programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics, and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing, such as spatial edge detection or image segmentation. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.

  6. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, Donald P.

    1998-01-01

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.

  7. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at an 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and assembled into a homogeneous scene acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system of the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.

  8. Anatomical features of pepper plants (Capsicum annuum L.) grown under red light-emitting diodes supplemented with blue or far-red light

    NASA Technical Reports Server (NTRS)

    Schuerger, A. C.; Brown, C. S.; Stryjewski, E. C.

    1997-01-01

    Pepper plants (Capsicum annuum L. cv. Hungarian Wax) were grown under metal halide (MH) lamps or light-emitting diode (LED) arrays with different spectra to determine the effects of light quality on the anatomy of leaves and stems. One LED (660) array supplied 90% red light at 660 nm (25 nm bandwidth at half-peak height) and 1% far-red light between 700-800 nm. A second LED (660/735) array supplied 83% red light at 660 nm and 17% far-red light at 735 nm (25 nm bandwidth at half-peak height). A third LED (660/blue) array supplied 98% red light at 660 nm, 1% blue light between 350-550 nm, and 1% far-red light between 700-800 nm. Control plants were grown under broad-spectrum metal halide lamps. Plants were grown at a mean photon flux (300-800 nm) of 330 µmol m-2 s-1 under a 12 h day-night photoperiod. Significant anatomical changes in stem and leaf morphologies were observed in plants grown under the LED arrays compared to plants grown under the broad-spectrum MH lamp. Cross-sectional areas of pepper stems, thickness of secondary xylem, numbers of intraxylary phloem bundles in the periphery of stem pith tissues, leaf thickness, numbers of chloroplasts per palisade mesophyll cell, and thickness of palisade and spongy mesophyll tissues were greatest in peppers grown under MH lamps, intermediate in plants grown under the 660/blue LED array, and lowest in peppers grown under the 660 or 660/735 LED arrays. Most anatomical features of pepper stems and leaves were similar among plants grown under the 660 or 660/735 LED arrays. The effects of spectral quality on anatomical changes in stem and leaf tissues of peppers generally correlate with the amount of blue light present in the primary light source.

  9. Anatomical features of pepper plants (Capsicum annuum L.) grown under red light-emitting diodes supplemented with blue or far-red light.

    PubMed

    Schuerger, A C; Brown, C S; Stryjewski, E C

    1997-03-01

    Pepper plants (Capsicum annuum L. cv. Hungarian Wax) were grown under metal halide (MH) lamps or light-emitting diode (LED) arrays with different spectra to determine the effects of light quality on the anatomy of leaves and stems. One LED (660) array supplied 90% red light at 660 nm (25 nm bandwidth at half-peak height) and 1% far-red light between 700-800 nm. A second LED (660/735) array supplied 83% red light at 660 nm and 17% far-red light at 735 nm (25 nm bandwidth at half-peak height). A third LED (660/blue) array supplied 98% red light at 660 nm, 1% blue light between 350-550 nm, and 1% far-red light between 700-800 nm. Control plants were grown under broad-spectrum metal halide lamps. Plants were grown at a mean photon flux (300-800 nm) of 330 µmol m-2 s-1 under a 12 h day-night photoperiod. Significant anatomical changes in stem and leaf morphologies were observed in plants grown under the LED arrays compared to plants grown under the broad-spectrum MH lamp. Cross-sectional areas of pepper stems, thickness of secondary xylem, numbers of intraxylary phloem bundles in the periphery of stem pith tissues, leaf thickness, numbers of chloroplasts per palisade mesophyll cell, and thickness of palisade and spongy mesophyll tissues were greatest in peppers grown under MH lamps, intermediate in plants grown under the 660/blue LED array, and lowest in peppers grown under the 660 or 660/735 LED arrays. Most anatomical features of pepper stems and leaves were similar among plants grown under the 660 or 660/735 LED arrays. The effects of spectral quality on anatomical changes in stem and leaf tissues of peppers generally correlate with the amount of blue light present in the primary light source.

  10. An evaluation of Winnipeg's photo enforcement safety program: results of time series analyses and an intersection camera experiment.

    PubMed

    Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla

    2014-01-01

    The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections as well as on crashes resulting from these behaviors. ARIMA time series analyses regarding crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property damage only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions so further monitoring is required to improve the delivery of this measure. Results from this study as well as limitations are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.

    1992-01-01

    The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image onto four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

  12. The impact of red light cameras (photo-red enforcement) on crashes in Virginia.

    DOT National Transportation Integrated Search

    2007-01-01

    Red light running is a significant public health concern, killing more than 800 people and injuring 200,000 in the United States per year (Retting et al., 1999a; Retting and Kyrychenko, 2002). To reduce red light running in Virginia, six jurisdiction...

  13. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  14. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. Acquisition and tracking are critical because of the narrow transmit beam. In some systems a single array detector performs both spatial acquisition and tracking to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can employ the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array; the maximum allowed frame rate increases as the size of the area of interest decreases, under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only part of the pixels are in fact used. Beam angles varying in the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors can be computed with the centroid equation. Tests showed that (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad, and (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
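    The centroid equation referred to above is the standard intensity-weighted mean of the spot image over the area of interest. A minimal sketch (the function name and sub-frame interface are illustrative):

    ```python
    import numpy as np

    def spot_centroid(frame):
        """Intensity-weighted centroid of a detector sub-frame.

        frame: 2-D array of pixel values from the area of interest.
        Returns (cx, cy) in pixel coordinates; the pointing error is then
        the offset of this centroid from the boresight pixel, scaled by
        the plate scale of the focusing optics.
        """
        frame = np.asarray(frame, dtype=float)
        total = frame.sum()
        ys, xs = np.indices(frame.shape)
        cx = (xs * frame).sum() / total
        cy = (ys * frame).sum() / total
        return cx, cy
    ```

    In practice a background threshold is subtracted first so that dark-level noise does not bias the centroid toward the frame center.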

  15. Young Stars Emerge from Orion Head

    NASA Image and Video Library

    2007-05-17

    This image from NASA's Spitzer Space Telescope shows infant stars "hatching" in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's "head," just north of the massive star Lambda Orionis. Wisps of red in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. This image shows infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight. http://photojournal.jpl.nasa.gov/catalog/PIA09412

  16. Young Stars Emerge from Orion's Head

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This image from NASA's Spitzer Space Telescope shows infant stars 'hatching' in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth.

    The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's 'head,' just north of the massive star Lambda Orionis.

    Wisps of red in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked.

    This image shows infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight.

  17. First Solar System Results of the Spitzer Space Telescope

    NASA Technical Reports Server (NTRS)

    VanCleve, J.; Cruikshank, D. P.; Stansberry, J. A.; Burgdorf, M. J.; Devost, D.; Emery, J. P.; Fazio, G.; Fernandez, Y. R.; Glaccum, W.; Grillmair, C.

    2004-01-01

    The Spitzer Space Telescope, formerly known as SIRTF, is now operational and delivers unprecedented sensitivity for the observation of Solar System targets. Spitzer's capabilities and first general results were presented at the January 2004 AAS meeting. In this poster, we focus on Spitzer's performance for moving targets, and the first Solar System results. Spitzer has three instruments: IRAC, IRS, and MIPS. IRAC (InfraRed Array Camera) provides simultaneous images at wavelengths of 3.6, 4.5, 5.8, and 8.0 microns. IRS (InfraRed Spectrograph) has 4 modules providing low-resolution (R=60-120) spectra from 5.3 to 40 microns, high-resolution (R=600) spectra from 10 to 37 microns, and an autonomous target acquisition system (PeakUp) which includes small-field imaging at 15 microns. MIPS (Multiband Imaging Photometer for SIRTF) does imaging photometry at 24, 70, and 160 microns and low-resolution (R=15-25) spectroscopy (SED mode) between 55 and 96 microns. Guaranteed Time Observer (GTO) programs include the moons of the outer Solar System, Pluto, Centaurs, Kuiper Belt objects, and comets.

  18. A Planar Two-Dimensional Superconducting Bolometer Array for the Green Bank Telescope

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Staguhn, Johannes G.; Chervenak, James A.; Chen, Tina C.; Moseley, S. Harvey; Wollack, Edward J.; Devlin, Mark J.; Dicker, Simon R.; Supanich, Mark

    2004-01-01

    In order to provide high-sensitivity rapid imaging at 3.3 mm (90 GHz) for the Green Bank Telescope - the world's largest steerable aperture - a camera is being built by the University of Pennsylvania, NASA/GSFC, and NRAO. The heart of this camera is an 8x8 close-packed, Nyquist-sampled detector array. We have designed and are fabricating a functional superconducting bolometer array system using a monolithic planar architecture. Read out by SQUID multiplexers, the superconducting transition-edge sensors will provide fast, linear, sensitive response for high-performance imaging. This will provide the first-ever superconducting bolometer array on a facility instrument.

  19. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, with accurate results. This paper mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.
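    Whatever the calibration details, any binocular reconstruction ultimately rests on triangulation; for the idealized parallel-camera case the relation is Z = f·B/d. A minimal sketch (the parallel-geometry simplification is ours; the paper's linear-array geometry is more involved):

    ```python
    def depth_from_disparity(focal_px, baseline_mm, disparity_px):
        """Depth of a matched point for two parallel, identical cameras.

        focal_px: focal length expressed in pixels.
        baseline_mm: distance between the two optical centres.
        disparity_px: difference of the point's image coordinates
        between the two cameras (must be non-zero).
        Returns the depth Z = f * B / d, in the same unit as the baseline.
        """
        if disparity_px == 0:
            raise ValueError("zero disparity: point at infinity")
        return focal_px * baseline_mm / disparity_px
    ```

    With a moving platform, sweeping the scene past two line cameras builds up a full 2-D disparity field one scan line at a time, which is how the wider field of view is obtained.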

  20. Feasibility Study of Utilizing Existing Infrared Array Cameras for Daylight Star Tracking on NASA's Ultra Long Duration Balloon (ULDB) Missions

    NASA Technical Reports Server (NTRS)

    Tueller, Jack (Technical Monitor); Fazio, Giovanni G.; Tolls, Volker

    2004-01-01

    The purpose of this study was to investigate the feasibility of developing a daytime star tracker for ULDB flights using a commercially available off-the-shelf infrared array camera. This report describes the system used for ground-based tests, the observations, the test results, and gives recommendations for continued development.

  1. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  2. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4' x 4' on the NICMOS 3 detector array (256 x 256 pixels, 40 micrometers on a side). The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames and controls the timing of the array. We give an estimate of performance, in terms of sensitivity for an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  3. Performance and field tests of a handheld Compton camera using 3-D position-sensitive scintillators coupled to multi-pixel photon counter arrays

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Takeuchi, K.; Okochi, H.; Ogata, H.; Kuroshima, H.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Adachi, S.; Uchiyama, T.; Suzuki, H.

    2014-11-01

    After the nuclear disaster in Fukushima, radiation decontamination has become particularly urgent. To help identify radiation hotspots and ensure effective decontamination, we have developed a novel Compton camera based on Ce-doped Gd3Al2Ga3O12 scintillators and multi-pixel photon counter (MPPC) arrays. Although its sensitivity is several times better than that of other cameras being tested in Fukushima, we introduce a depth-of-interaction (DOI) method to further improve the angular resolution. For gamma rays, the DOI information, in addition to the 2-D position, is obtained by measuring the pulse-height ratio of the MPPC arrays coupled to the ends of the scintillator. We present the detailed performance and the results of various field tests conducted in Fukushima with the prototype 2-D and DOI Compton cameras. Moreover, we demonstrate stereo measurement of gamma rays, which enables measurement not only of the direction but also of the approximate distance to radioactive hotspots.
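    The pulse-height-ratio idea can be sketched with a simple exponential light-sharing model. The attenuation coefficient, bar length, and the model itself are illustrative assumptions here, not the paper's calibration:

    ```python
    import math

    def depth_of_interaction(q_top, q_bottom, length_mm=10.0, mu_per_mm=0.1):
        """Estimate interaction depth in a scintillator bar from the pulse
        heights q_top and q_bottom measured by the MPPCs at its two ends.

        Assumes the scintillation light reaching each end is attenuated
        exponentially with distance, so q_top / q_bottom = exp(mu*(L - 2z))
        for an interaction at depth z below the top face; inverting gives
        z = L/2 - ln(q_top/q_bottom) / (2*mu). The result is clamped to
        the physical length of the bar.
        """
        z = length_mm / 2.0 - math.log(q_top / q_bottom) / (2.0 * mu_per_mm)
        return min(max(z, 0.0), length_mm)
    ```

    Equal pulse heights place the interaction at mid-bar; an excess at one end pulls the estimate toward that end, which is exactly the extra coordinate the DOI method feeds into the Compton reconstruction.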

  4. The NASA - Arc 10/20 micron camera

    NASA Technical Reports Server (NTRS)

    Roellig, T. L.; Cooper, R.; Deutsch, L. K.; Mccreight, C.; Mckelvey, M.; Pendleton, Y. J.; Witteborn, F. C.; Yuen, L.; Mcmahon, T.; Werner, M. W.

    1994-01-01

    A new infrared camera (AIR Camera) has been developed at NASA - Ames Research Center for observations from ground-based telescopes. The heart of the camera is a Hughes 58 x 62 pixel Arsenic-doped Silicon detector array that has the spectral sensitivity range to allow observations in both the 10 and 20 micron atmospheric windows.

  5. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    NASA Astrophysics Data System (ADS)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  6. Method for determining and displaying the spatial distribution of a spectral pattern of received light

    DOEpatents

    Bennett, C.L.

    1996-07-23

    An imaging Fourier transform spectrometer is described having a Fourier transform infrared spectrometer providing a series of images to a focal plane array camera. The focal plane array camera is clocked to a multiple of zero crossing occurrences as caused by a moving mirror of the Fourier transform infrared spectrometer and as detected by a laser detector, such that the frame capture rate of the focal plane array camera corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer. The images are transmitted to a computer for processing such that representations of the images as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor or otherwise stored and manipulated by the computer. 2 figs.
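
    Clocking the focal plane array to the interferometer's laser zero crossings gives every pixel a uniformly sampled interferogram, so a spectrum per pixel follows from an FFT along the frame axis. A minimal sketch with a synthetic single-wavenumber source (the array sizes and the 0.125 cycles/sample frequency are illustrative assumptions):

```python
# Sketch of the imaging-FTS principle described in the patent abstract:
# frames captured at laser zero crossings form a uniformly sampled
# interferogram at every pixel; a per-pixel FFT yields the spectrum.
import numpy as np

frames, ny, nx = 256, 4, 4
# cosine interferogram of a single-wavenumber source, same at each pixel
interferogram = 1.0 + np.cos(2 * np.pi * 0.125 * np.arange(frames))
cube = np.tile(interferogram[:, None, None], (1, ny, nx))

spectra = np.abs(np.fft.rfft(cube, axis=0))  # spectrum at every pixel
peak_bin = spectra[1:, 0, 0].argmax() + 1    # skip the DC term
print(peak_bin)  # -> 32 (0.125 cycles/sample * 256 frames)
```

    A "fingerprint" display would then weight each pixel's spectrum by the target spectral pattern instead of picking a single bin.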

  7. Plenoptic camera based on a liquid crystal microlens array

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng

    2015-09-01

    A liquid crystal microlens array (LCMLA) whose focal length is tuned by the voltage signals applied between its top and bottom electrodes is fabricated, and its common optical focusing characteristics are tested. The relationship between the focal length and the applied voltage signals is given. The LCMLA is integrated with an image sensor and further coupled with a main lens so as to construct a plenoptic camera. Several raw images are acquired with the LCMLA-based plenoptic camera at different applied voltage signals and compared. Our experiments demonstrate that, by utilizing an LCMLA in a plenoptic camera, the focused zone of the camera can be shifted effectively simply by changing the voltage signals applied between the electrodes of the LCMLA, which is equivalent to extending the depth of field.

  8. Floodwaters Renew Zambia's Kafue Wetland

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Not all floods are unwanted. Heavy rainfall in southern Africa between December 2003 and April 2004 provided central Zambia with floodwaters needed to support the diverse uses of water within the Kafue Flats area. The Kafue Flats are home to about one million people and provide a rich inland fishery, habitat for an array of unique wildlife, and the means for hydroelectricity production. The Flats fall between two dams: upstream to the west (not visible here) is the Izhi-tezhi, and downstream (middle right of the images) is the Kafue Gorge dam. Since the construction of these dams, the flooded area has been reduced and the timing and intensity of the inundation have changed. During June 2004 an agreement was made with the hydroelectricity company to restore water releases from the dams according to a more natural flooding regime. These images from NASA's Multi-angle Imaging SpectroRadiometer (MISR) illustrate surface changes to the wetlands and other surfaces in central Zambia resulting from an unusually lengthy wet season. The Kafue Flats appear relatively dry on July 19, 2003 (upper images), with the Kafue River visible as a slender dark line that snakes from east to west on its way to join the Zambezi (visible in the lower right-hand corner). On July 21, 2004 (lower images), well into the dry season, much of the 6,500-square kilometer area of the Kafue Flats remains inundated. To the east of the Kafue Flats is Lusaka, the Zambian capital, visible as a pale area in the middle right of the picture, north of the river. In the upper portions of these images is the prominent roundish shape of the Lukanga Swamp, another important wetland.

    The images along the left are natural-color views from MISR's nadir camera, and the images along the right are angular composites in which red band data from MISR's 46° forward, nadir, and 46° backward viewing cameras are displayed as red, green and blue, respectively. In order to preserve brightness variations among the various cameras, the data from each camera were processed identically. Here, color changes indicate surface texture, and are influenced by terrain, vegetation structure, soil type and soil moisture content. Wet surfaces or areas with standing water appear blue in this display because sun glitter makes smooth, wet surfaces look brighter at the backward camera's view angle. Mostly the landscape appears somewhat purple, indicating that most of the surfaces scatter sunlight in both backward and forward directions. Areas that appear with a slight greenish hue can indicate sparse vegetation, since the nadir camera is more likely to sight the gaps between the trees or shrubs, and since vegetation is darker (in the red band) than the underlying soil surface. Areas that preferentially exhibit a red or pink hue correspond with wetland vegetation. The plateau of the Kafue National Park, to the west of Lukanga Swamp, appears brighter in 2004 compared with 2003, which indicates weaker absorption at the red band. Overall, the 2004 image exhibits a subtle blue hue (preference for forward-scattering) compared with 2003, which indicates overall surface changes that may be a result of enhanced surface wetness.
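
    The angular composite described in this caption amounts to stacking the red-band images from the three view angles into the R, G and B channels of one display image, with every camera processed identically. A sketch with synthetic stand-in pixel values, where the backward view is brightest, as it would be over a glinting wet surface:

```python
# Sketch of a MISR-style angular composite: the red band from three view
# angles becomes the three channels of one RGB display image.
import numpy as np

# red-band images from the three view angles (synthetic stand-ins)
forward  = np.full((2, 2), 0.30)   # 46-deg forward camera
nadir    = np.full((2, 2), 0.25)   # nadir camera
backward = np.full((2, 2), 0.55)   # 46-deg backward camera

# identical processing per camera preserves brightness differences;
# channels: forward -> red, nadir -> green, backward -> blue
composite = np.stack([forward, nadir, backward], axis=-1)

# a glinting wet surface is brightest in the backward view -> blue hue
print(composite[0, 0].argmax())  # -> 2 (the blue channel)
```

    The hue of each pixel thus encodes which view direction scatters most strongly, which is exactly how the caption reads surface texture from color.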

    The Multiangle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82o north and 82o south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 19072 and 24421. The panels cover an area of 235 kilometers x 239 kilometers, and utilize data from blocks 100 to 103 within World Reference System-2 path 172.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  9. Dwarf Galaxies Swimming in Tidal Tails

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This false-color infrared image from NASA's Spitzer Space Telescope shows little 'dwarf galaxies' forming in the 'tails' of two larger galaxies that are colliding together. The big galaxies are at the center of the picture, while the dwarfs can be seen as red dots in the red streamers, or tidal tails. The two blue dots above the big galaxies are stars in the foreground.

    Galaxy mergers are common occurrences in the universe; for example, our own Milky Way galaxy will eventually smash into the nearby Andromeda galaxy. When two galaxies meet, they tend to rip each other apart, leaving a trail, called a tidal tail, of gas and dust in their wake. It is out of this galactic debris that new dwarf galaxies are born.

    The new Spitzer picture demonstrates that these particular dwarfs are actively forming stars. The red color indicates the presence of dust produced in star-forming regions, including organic molecules called polycyclic aromatic hydrocarbons. These carbon-containing molecules are also found on Earth, in car exhaust and on burnt toast, among other places. Here, the molecules are being heated up by the young stars, and, as a result, shine in infrared light.

    This image was taken by the infrared array camera on Spitzer. It is a 4-color composite of infrared light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange), and 8.0 microns (red). Starlight has been subtracted from the orange and red channels in order to enhance the dust features.

  10. Concave Surround Optics for Rapid Multi-View Imaging

    DTIC Science & Technology

    2006-11-01

    thus is amenable to capturing dynamic events avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system...flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically

  11. EVA 2 activity on Flight Day 5 to survey the HST solar array panels

    NASA Image and Video Library

    1997-02-15

    STS082-719-002 (14 Feb. 1997) --- Astronaut Joseph R. Tanner (right) stands on the end of Discovery's Remote Manipulator System (RMS) arm and aims a camera at the solar array panels on the Hubble Space Telescope (HST) as astronaut Gregory J. Harbaugh assists. The second Extravehicular Activity (EVA) photograph was taken with a 70mm camera from inside Discovery's cabin.

  12. PNIC - A near infrared camera for testing focal plane arrays

    NASA Astrophysics Data System (ADS)

    Hereld, Mark; Harper, D. A.; Pernic, R. J.; Rauscher, Bernard J.

    1990-07-01

    This paper describes the design and the performance of the Astrophysical Research Consortium prototype near-infrared camera (pNIC) designed to test focal plane arrays both on and off the telescope. Special attention is given to the detector in pNIC, the mechanical and optical designs, the electronics, and the instrument interface. Experiments performed to illustrate the most salient aspects of pNIC are described.

  13. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

    Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. Since the emergence of the light field camera, most light field information acquisition systems (LFIAS) have adopted a micro-array structure, mainly microlens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the LFIAS structures commonly used in light field cameras in recent years and analyzes them based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system we call the "micro aperture array (MAA)", and analyzes it based on the knowledge of information optics. This paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.

  14. Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera

    PubMed Central

    Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing

    2018-01-01

    The geometric calibration of a spaceborne thermal-infrared camera with a high spatial resolution and wide coverage can set benchmarks for providing an accurate geographical coordinate for the retrieval of land surface temperature. The practice of using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth can help acquire thermal-infrared images of large breadth with high spatial resolutions. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of the YG-14, China's first satellite equipped with thermal-infrared cameras of high spatial resolution, China's Anyang Imaging and Taiyuan Imaging are used to conduct an experiment of geometric calibration and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
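
    The whiskbroom trait of equal time intervals but unequal angles, which the calibration must model, can be illustrated with a hypothetical low-order polynomial mapping sample time to scan angle. The coefficients below are invented for illustration; they are not the paper's calibrated parameters:

```python
# Illustration of "equal time intervals, unequal angles": samples are
# taken at a fixed clock rate, but the scan mirror's angular rate varies,
# so the angular spacing between samples is not constant.
import numpy as np

t = np.linspace(0.0, 1.0, 5)          # equally spaced sample times
a0, a1, a2 = 0.0, 0.8, 0.15           # hypothetical angle-vs-time coefficients
angle = a0 + a1 * t + a2 * t ** 2     # scan angle at each sample

steps = np.diff(angle)
print(np.allclose(steps, steps[0]))   # False: equal times, unequal angles
```

    Calibration then amounts to solving for the temporal and angle parameters (here a0, a1, a2) that best reproject ground control points.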

  15. The GISMO-2 Bolometer Camera

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; et al.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  16. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-07

    S61-E-020 (7 Dec 1993) --- This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993, in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  17. New Views of a Familiar Beauty

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    [figures removed for brevity, see original site] Figures 2, 3, 4 and 5

    This image composite compares the well-known visible-light picture of the glowing Trifid Nebula (left panel) with infrared views from NASA's Spitzer Space Telescope (remaining three panels). The Trifid Nebula is a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius.

    The false-color Spitzer images reveal a different side of the Trifid Nebula. Where dark lanes of dust are visible trisecting the nebula in the visible-light picture, bright regions of star-forming activity are seen in the Spitzer pictures. All together, Spitzer uncovered 30 massive embryonic stars and 120 smaller newborn stars throughout the Trifid Nebula, in both its dark lanes and luminous clouds. These stars are visible in all the Spitzer images, mainly as yellow or red spots. Embryonic stars are developing stars about to burst into existence. Ten of the 30 massive embryos discovered by Spitzer were found in four dark cores, or stellar 'incubators,' where stars are born. Astronomers using data from the Institute of Radioastronomy millimeter telescope in Spain had previously identified these cores but thought they were not quite ripe for stars. Spitzer's highly sensitive infrared eyes were able to penetrate all four cores to reveal rapidly growing embryos.

    Astronomers can actually count the individual embryos tucked inside the cores by looking closely at the Spitzer image taken by its infrared array camera (figure 4). This instrument has the highest spatial resolution of Spitzer's imaging cameras. The Spitzer image from the multiband imaging photometer (figure 5), on the other hand, specializes in detecting cooler materials. Its view highlights the relatively cool core material falling onto the Trifid's growing embryos. The middle panel is a combination of Spitzer data from both of these instruments.

    The embryos are thought to have been triggered by a massive 'type O' star, which can be seen as a white spot at the center of the nebula in all four images. Type O stars are the most massive stars, ending their brief lives in explosive supernovas. The small newborn stars probably arose at the same time as the O star, and from the same original cloud of gas and dust.

    The Spitzer infrared array camera image is a three-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 and 8.0 microns (red). The Spitzer multiband imaging photometer image (figure 3) shows 24-micron emissions. The Spitzer mosaic image combines data from these pictures, showing light of 4.5 microns (blue), 8.0 microns (green) and 24 microns (red). The visible-light image (figure 2) is from the National Optical Astronomy Observatory, Tucson, Ariz.

  18. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on the OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
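
    The fallspeed derivation described above is a simple distance-over-time computation between the two trigger arrays. A minimal sketch (the 32 mm spacing is from the text; the timestamps are illustrative):

```python
# Sketch of the MASC fallspeed computation: a hydrometeor crosses the
# upper IR emitter-detector array, then the lower one 32 mm below;
# fallspeed is separation divided by traversal time.
SEPARATION_M = 0.032  # vertical spacing of the two trigger arrays

def fallspeed(t_upper_s, t_lower_s):
    """Fallspeed in m/s from the two trigger timestamps (seconds)."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower array must trigger after the upper one")
    return SEPARATION_M / dt

print(fallspeed(0.000, 0.016))  # -> 2.0 m/s
```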

  19. STS-32 photographic equipment (cameras, lenses, film magazines) on flight deck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-32 photographic equipment is displayed on the aft flight deck of Columbia, Orbiter Vehicle (OV) 102. On the payload station are a dual camera mount with two handheld HASSELBLAD cameras, camera lenses, and film magazines. This array of equipment will be used to record onboard activities and observations of the Earth's surface.

  20. Condenser for photolithography system

    DOEpatents

    Sweatt, William C.

    2004-03-02

    A condenser for a photolithography system, in which a mask image from a mask is projected onto a wafer through a camera having an entrance pupil, includes a source of propagating radiation, a first mirror illuminated by the radiation, a mirror array illuminated by the radiation reflected from said first mirror, and a second mirror illuminated by the radiation reflected from the array. The mirror array includes a plurality of micromirrors. Each of the micromirrors is selectively actuatable independently of each other. The first mirror and the second mirror are disposed such that the source is imaged onto a plane of the mask and the mirror array is imaged into the entrance pupil of the camera.

  1. A millimetre-wave MIMO radar system for threat detection in urban environments

    NASA Astrophysics Data System (ADS)

    Kirschner, A. J.; Guetlein, J.; Bertl, S.; Detlefsen, J.

    2012-10-01

    The European Defence Agency (EDA) engages countermeasures against Improvised Explosive Devices (IEDs) by funding several scientific programs on threat awareness, countermeasures against IEDs, or land-mine detection, of which this work is only one of numerous projects. The program, denoted as Surveillance in an urban environment using mobile sensors (SUM), covers the idea of equipping one or more vehicles of a patrol or a convoy with a set of sensors exploiting different physical principles in order to gain detailed insights of the road situation ahead. In order to give an added value to a conventional visual camera system, measurement data from an infra-red (IR) camera, a radiometer and a millimetre-wave radar are fused with data from an optical image and are displayed on a human-machine-interface (HMI) which shall assist the vehicle's co-driver to identify suspect objects or persons on or next to the road without forcing the vehicle to stop its cruise. This paper shall especially cover the role of the millimetre-wave radar sensor and its different operational modes. Measurement results are discussed. It is possible to alter the antenna mechanically, which gives two choices for a field of view and angular resolution trade-off. Furthermore a synthetic aperture radar mode is possible and has been tested successfully. MIMO radar principles like orthogonal signal design were exploited to form a virtual array from 4 transmitters and 4 receivers. In joint evaluation, it was possible to detect e.g. grenade shells under cardboard boxes or covered metal barrels which were invisible for optical or infra-red detection.

  2. Upgraded cameras for the HESS imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gérard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-François; Gräber, Tobias; Hinton, James; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, François

    2016-08-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes, sensitive to cosmic gamma rays of energies between 30 GeV and several tens of TeV. Four of them started operations in 2003 and their photomultiplier tube (PMT) cameras are currently undergoing a major upgrade, with the goals of improving the overall performance of the array and reducing the failure rate of the ageing systems. With the exception of the 960 PMTs, all components inside the camera have been replaced: these include the readout and trigger electronics, the power, ventilation and pneumatic systems and the control and data acquisition software. New designs and technical solutions have been introduced: the readout makes use of the NECTAr analog memory chip, which samples and stores the PMT signals and was developed for the Cherenkov Telescope Array (CTA). The control of all hardware subsystems is carried out by an FPGA coupled to an embedded ARM computer, a modular design which has proven to be very fast and reliable. The new camera software is based on modern C++ libraries such as Apache Thrift, ØMQ and Protocol buffers, offering very good performance, robustness, flexibility and ease of development. The first camera was upgraded in 2015, the other three cameras are foreseen to follow in fall 2016. We describe the design, the performance, the results of the tests and the lessons learned from the first upgraded H.E.S.S. camera.

  3. Jupiter-Io Montage

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This is a montage of New Horizons images of Jupiter and its volcanic moon Io, taken during the spacecraft's Jupiter flyby in early 2007. The Jupiter image is an infrared color composite taken by the spacecraft's near-infrared imaging spectrometer, the Linear Etalon Imaging Spectral Array (LEISA) at 1:40 UT on Feb. 28, 2007. The infrared wavelengths used (red: 1.59 um, green: 1.94 um, blue: 1.85 um) highlight variations in the altitude of the Jovian cloud tops, with blue denoting high-altitude clouds and hazes, and red indicating deeper clouds. The prominent bluish-white oval is the Great Red Spot. The observation was made at a solar phase angle of 75 degrees but has been projected onto a crescent to remove distortion caused by Jupiter's rotation during the scan. The Io image, taken at 00:25 UT on March 1, 2007, is an approximately true-color composite taken by the panchromatic Long-Range Reconnaissance Imager (LORRI), with color information provided by the 0.5 um ('blue') and 0.9 um ('methane') channels of the Multispectral Visible Imaging Camera (MVIC). The image shows a major eruption in progress on Io's night side, at the northern volcano Tvashtar. Incandescent lava glows red beneath a 330-kilometer high volcanic plume, whose uppermost portions are illuminated by sunlight. The plume appears blue due to scattering of light by small particles in the plume.

    This montage appears on the cover of the Oct. 12, 2007, issue of Science magazine.

  4. Ga:Ge array development

    NASA Technical Reports Server (NTRS)

    Young, Erick T.; Rieke, G. H.; Low, Frank J.; Haller, E. E.; Beeman, J. W.

    1989-01-01

    Work at the University of Arizona and at Lawrence Berkeley Laboratory on the development of a far infrared array camera for the Multiband Imaging Photometer on the Space Infrared Telescope Facility (SIRTF) is discussed. The camera design uses stacked linear arrays of Ge:Ga photoconductors to make a full two-dimensional array. Initial results from a 1 x 16 array using a thermally isolated J-FET readout are presented. Dark currents below 300 electrons s^-1 and readout noises of 60 electrons were attained. Operation of these types of detectors in an ionizing radiation environment is discussed. Results of radiation testing using both low energy gamma rays and protons are given. Work on advanced C-MOS cascode readouts that promise lower temperature operation and higher levels of performance than the current J-FET based devices is described.

  5. The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger

    NASA Astrophysics Data System (ADS)

    Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.

    2009-05-01

    Future large arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array technology allows the use of lookup tables at the array trigger level to form a real-time pattern recognition trigger that capitalizes on the multiple view points of the shower at different shower core distances. A proof of principle system is currently under construction. It is based on 400 MHz FPGAs, and the goal is for camera trigger rates of up to 10 MHz and a tunable cosmic-ray background suppression at the array level.

  6. Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed; Wu, Chensheng; Rzasa, John; Davis, Christopher C.

    2012-10-01

    Ideally, as planar wave fronts travel through an imaging system, all rays, or vectors pointing in the direction of the propagation of energy are parallel, and thus the wave front is focused to a particular point. If the wave front arrives at an imaging system with energy vectors that point in different directions, each part of the wave front will be focused at a slightly different point on the sensor plane and result in a distorted image. The Hartmann test, which involves the insertion of a series of pinholes between the imaging system and the sensor plane, was developed to sample the wavefront at different locations and measure the distortion angles at different points in the wave front. An adaptive optic system, such as a deformable mirror, is then used to correct for these distortions and allow the planar wave front to focus at the point desired on the sensor plane, thereby correcting the distorted image. The apertures of a pinhole array limit the amount of light that reaches the sensor plane. By replacing the pinholes with a microlens array each bundle of rays is focused to brighten the image. Microlens arrays are making their way into newer imaging technologies, such as "light field" or "plenoptic" cameras. In these cameras, the microlens array is used to recover the ray information of the incoming light by using post processing techniques to focus on objects at different depths. The goal of this paper is to demonstrate the use of these plenoptic cameras to recover the distortions in wavefronts. Taking advantage of the microlens array within the plenoptic camera, CODE-V simulations show that its performance can provide more information than a Shack-Hartmann sensor. Using the microlens array to retrieve the ray information and then backstepping through the imaging system provides information about distortions in the arriving wavefront.
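
    Both the Shack-Hartmann sensor and the plenoptic approach reduce to the same local measurement: the centroid shift of the spot behind each lenslet, divided by the lenslet focal length, gives the wavefront slope at that subaperture. A minimal sketch for a single lenslet subimage (the focal length and pixel pitch are illustrative assumptions):

```python
# Sketch of the per-lenslet wavefront slope measurement underlying both
# Shack-Hartmann and plenoptic wavefront sensing: a tilted wavefront
# displaces the focused spot; shift / focal length = local slope.
import numpy as np

FOCAL_LENGTH_MM = 2.0  # lenslet focal length (illustrative)

def local_slope(subimage, pitch_mm):
    """Wavefront slope (rad) from the spot centroid in one lenslet subimage."""
    xs = np.indices(subimage.shape)[1]
    cx = (xs * subimage).sum() / subimage.sum()   # x centroid in pixels
    center = (subimage.shape[1] - 1) / 2.0        # on-axis focus position
    shift_mm = (cx - center) * pitch_mm           # spot displacement
    return shift_mm / FOCAL_LENGTH_MM             # small-angle approximation

spot = np.zeros((5, 5))
spot[2, 3] = 1.0                                  # spot one pixel to the right
print(local_slope(spot, pitch_mm=0.01))           # -> 0.005 rad
```

    A plenoptic camera additionally records the full ray bundle under each microlens, which is what lets it recover more information than the single centroid used here.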

  7. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
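The feature type described above can be sketched for one of the four directions: form a horizontal difference array, threshold it to [-T, T], and estimate the transition probabilities between adjacent difference values. This is an illustrative reconstruction, not the authors' code; the threshold T=4 is an assumption.

```python
import numpy as np

def transition_matrix(block, T=4):
    """Horizontal Markov transition probability matrix of a difference 2-D array.

    The difference array is thresholded to [-T, T], giving a
    (2T+1) x (2T+1) matrix whose entries serve as classifier features.
    """
    d = np.diff(block.astype(int), axis=1)  # horizontal difference array
    d = np.clip(d, -T, T) + T               # shift values into 0..2T
    m = np.zeros((2 * T + 1, 2 * T + 1))
    # Count transitions between horizontally adjacent difference values
    for a, b in zip(d[:, :-1].ravel(), d[:, 1:].ravel()):
        m[a, b] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    # Normalize rows to probabilities (unseen states stay all-zero)
    return np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)

feats = transition_matrix(np.arange(16).reshape(4, 4))
# a 9x9 matrix; flattening it (and the other three directions) gives the SVM features
```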

  8. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color-shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
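The ΔE figure quoted above is a CIELAB color difference. A minimal sketch of how such a difference is computed from two XYZ measurements, using the standard CIE 1976 conversion (the D65 white point and the sample values are assumptions for illustration):

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):  # D65 reference white (assumed)
    """CIE XYZ -> CIELAB (CIE 1976)."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    # Cube root above the linearity threshold, linear segment below it
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(xyz1, xyz2):
    """CIE76 color difference (Euclidean distance in CIELAB)."""
    return float(np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2)))

# Identical stimuli differ by exactly zero
assert delta_e((41.2, 21.3, 1.9), (41.2, 21.3, 1.9)) == 0.0
```

A ΔE below roughly 1 is generally considered imperceptible, which puts the reported ΔE<0.01 far below visible thresholds.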

  9. High signal-to-noise-ratio electro-optical terahertz imaging system based on an optical demodulating detector array.

    PubMed

    Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring

    2009-11-01

    We present a 64×48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.

  10. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range, and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types, which leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera; 2) we develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts; 3) we demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images; and 4) we verify the performance of our GAP camera using extensive simulations with multispectral images of real-world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  11. Dynamic characteristics of far-field radiation of current modulated phase-locked diode laser arrays

    NASA Technical Reports Server (NTRS)

    Elliott, R. A.; Hartnett, K.

    1987-01-01

    A versatile and powerful streak camera/frame grabber system for studying the evolution of the near- and far-field radiation patterns of diode lasers was assembled and tested. Software needed to analyze and display the data acquired with the streak camera/frame grabber system was written, and the total package was used to record and perform preliminary analyses of the behavior of two types of laser: a ten-emitter gain-guided array and a flared-waveguide Y-coupled array. Examples of the information which can be gathered with this system are presented.

  12. NECTAr: New electronics for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Vorobiov, S.; Bolmont, J.; Corona, P.; Delagnes, E.; Feinstein, F.; Gascón, D.; Glicenstein, J.-F.; Naumann, C. L.; Nayman, P.; Sanuy, A.; Toussenel, F.; Vincent, P.

    2011-05-01

    The European astroparticle physics community aims to design and build the next-generation array of Imaging Atmospheric Cherenkov Telescopes (IACTs), which will benefit from the experience of the existing H.E.S.S. and MAGIC detectors and further expand the very-high-energy astronomy domain. In order to gain an order of magnitude in sensitivity in the 10 GeV to >100 TeV range, the Cherenkov Telescope Array (CTA) will employ 50-100 telescopes of various sizes equipped with 1000-4000 channels per camera, to be compared with the 6000 channels of the final H.E.S.S. array. A 3-year program, started in 2009, aims to build and test a demonstrator module of a generic CTA camera. We present here the NECTAr design of front-end electronics for the CTA, adapted to the trigger and data acquisition of a large IACT array, with simple production and maintenance. Cost and camera performance are optimized by maximizing the integration of the front-end electronics (amplifiers, fast analog samplers, ADCs) in an ASIC, achieving several GS/s sampling and a few μs of readout dead-time. We present preliminary results and performances extrapolated from Monte Carlo simulations.

  13. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-07

    STS-109 astronaut Michael J. Massimino, mission specialist, perched on the Shuttle's robotic arm, works at the stowage area for the Hubble Space Telescope's port-side solar array. Working in tandem with James H. Newman, Massimino removed the old port solar array and stored it in Columbia's payload bay for return to Earth. The two went on to install a third-generation solar array and its associated electrical components. Two crewmates had accomplished the same feat with the starboard array on the previous day. In addition to the replacement of the solar arrays, the STS-109 crew also installed the experimental cooling system for the Hubble's Near-Infrared Camera (NICMOS), replaced the power control unit (PCU), and replaced the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS). The 108th flight overall in NASA's Space Shuttle Program, the Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 and lasted 10 days, 22 hours, and 11 minutes. Five space walks were conducted to complete the HST upgrades. The Marshall Space Flight Center in Huntsville, Alabama had responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built.

  14. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media have garnered more interest from many areas of business, government, and education in recent years, due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips, and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs, and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz with a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
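The noise-rejection idea above (a 70 Hz flashed LED sampled at 4 kHz) can be illustrated with a lock-in style demodulation: correlating the photodiode samples against 70 Hz references strongly suppresses constant ambient light. This is my illustrative sketch, not the authors' actual filtering circuit:

```python
import numpy as np

FS = 4000.0      # photodiode sample rate, Hz (from the abstract)
F_LED = 70.0     # LED flash rate, Hz, 50% duty cycle (from the abstract)

def led_amplitude(samples):
    """Recover the 70 Hz flashed-LED component from one photodiode channel.

    Correlate with in-phase and quadrature 70 Hz references; DC ambient
    light (sunlight, indoor lighting) averages out of both correlations.
    """
    t = np.arange(len(samples)) / FS
    i = np.sum(samples * np.sin(2 * np.pi * F_LED * t))
    q = np.sum(samples * np.cos(2 * np.pi * F_LED * t))
    return 2 * np.hypot(i, q) / len(samples)

# A unit-amplitude flashing LED riding on strong constant ambient light...
t = np.arange(4000) / FS
sig = 100.0 + 1.0 * (np.sin(2 * np.pi * F_LED * t) > 0)
# ...still yields a clear amplitude, while pure ambient light yields ~0
```

Scanning such a demodulated amplitude across the photodiode array elements locates the necklace, from which the panning command follows.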

  15. Performance verification of the FlashCam prototype camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Werner, F.; Bauer, C.; Bernhard, S.; Capasso, M.; Diebold, S.; Eisenkolb, F.; Eschbach, S.; Florin, D.; Föhr, C.; Funk, S.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Lahmann, R.; Marszalek, A.; Pfeifer, M.; Principe, G.; Pühlhofer, G.; Pürckhauer, S.; Rajda, P. J.; Reimer, O.; Santangelo, A.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Wolf, D.; Zietara, K.; CTA Consortium

    2017-12-01

    The Cherenkov Telescope Array (CTA) is a future gamma-ray observatory that is planned to significantly improve upon the sensitivity and precision of the current generation of Cherenkov telescopes. The observatory will consist of several dozen telescopes of different sizes, equipped with different types of cameras. Of these, the FlashCam camera system is the first to implement a fully digital signal processing chain, which allows for a traceable, configurable trigger scheme and flexible signal reconstruction. As of autumn 2016, a prototype FlashCam camera for the medium-sized telescopes of CTA nears completion. First results of the ongoing system tests demonstrate that the signal chain and the readout system surpass CTA requirements. The stability of the system is shown using long-term temperature cycling.

  16. Progress in passive submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1×2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.

  17. Oil Fire Plumes Over Baghdad

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images, acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment.

    The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image.
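The anaglyph construction described above amounts to a simple channel assignment: one view's band goes into the red channel, the other view's into green and blue. A minimal sketch (array names are illustrative, not from the MISR processing chain):

```python
import numpy as np

def stereo_anaglyph(nadir_red, backward_red):
    """Compose a red/cyan stereo anaglyph from two single-band views.

    Following the description above: red-band nadir-camera data feed the
    red channel, and red-band data from the backward-viewing camera feed
    the green and blue channels.
    """
    nadir = np.asarray(nadir_red)
    back = np.asarray(backward_red)
    # Stack into an (H, W, 3) RGB image: [nadir, backward, backward]
    return np.stack([nadir, back, back], axis=-1)

img = stereo_anaglyph(np.zeros((2, 2)), np.ones((2, 2)))
# shape (2, 2, 3); red channel from the nadir view, green/blue from the backward view
```

Viewed through red/blue glasses, the parallax between the two camera angles is perceived as height, which is what separates rising smoke from surface features.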

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory,Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  18. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

    as the change in blur at different distances from the pixel plane can be inferred. Cameras with a micro lens array (called plenoptic cameras...images from 8 slightly different perspectives [14,43]. Dappled photography is similar to the plenoptic camera approach except that a cosine mask

  19. The GCT camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  20. Using the Xbox Kinect sensor for positional data acquisition

    NASA Astrophysics Data System (ADS)

    Ballester, Jorge; Pheatt, Chuck

    2013-01-01

    The Kinect sensor was introduced in November 2010 by Microsoft for the Xbox 360 video game system. It is designed to be positioned above or below a video display to track player body and hand movements in three dimensions (3D). The sensor contains a red, green, and blue (RGB) camera, a depth sensor, an infrared (IR) light source, a three-axis accelerometer, and a multi-array microphone, as well as hardware required to transmit sensor information to an external receiver. In this article, we evaluate the capabilities of the Kinect sensor as a 3D data-acquisition platform for use in physics experiments. Data obtained for a simple pendulum, a spherical pendulum, projectile motion, and a bouncing basketball are presented. Overall, the Kinect sensor is found to be a useful data-acquisition tool for motion studies in the physics laboratory.
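For motion studies like the pendulum experiments mentioned above, a period can be extracted from the sampled position data by locating the dominant frequency in its spectrum. A sketch of that analysis (the sampling rate and synthetic data are illustrative assumptions, not the article's measurements):

```python
import numpy as np

def pendulum_period(times, x):
    """Estimate the oscillation period of uniformly sampled position data.

    Detrend the signal and take the dominant (non-DC) frequency of its FFT.
    """
    x = np.asarray(x, float) - np.mean(x)
    dt = times[1] - times[0]                 # assumes uniform sampling
    freqs = np.fft.rfftfreq(len(x), dt)
    spectrum = np.abs(np.fft.rfft(x))
    f0 = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return 1.0 / f0

t = np.arange(0, 30, 1 / 30)                 # 30 s of data at ~30 fps
period = pendulum_period(t, 0.2 * np.sin(2 * np.pi * t / 1.5))
# recovers a period close to 1.5 s
```

The frequency resolution is the reciprocal of the record length, so longer captures sharpen the period estimate.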

  1. Next generation miniature simultaneous multi-hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Gupta, Neelam

    2014-03-01

    We present the concept for a hyperspectral imaging system that uses a Fabry-Perot tunable filter (FPTF) array fabricated with "miniature optical electrical mechanical system" (MOEMS) technology [1]. Using an array of FPTFs as an approach to hyperspectral imaging relaxes the wavelength tuning requirements considerably because of the reduced portion of the spectrum that is covered by each element in the array. In this paper, Pacific Advanced Technology and ARL present the results of a concept design and analysis of a MOEMS-based tunable Fabry-Perot filter (FPTF) array intended to perform simultaneous multispectral and hyperspectral imaging with relatively high spatial resolution. The concept design was developed with the support of an Army SBIR Phase I program. The Fabry-Perot tunable MOEMS filter array was combined with a miniature optics array and a focal plane array of 1024 x 1024 pixels to produce 16 colors in every frame of the camera. Each color image has a spatial resolution of 256 x 256 pixels with an IFOV of 1.7 mrad and a FOV of 25 degrees. The spectral images are collected simultaneously, allowing high-resolution spectral-spatial-temporal information in each frame of the camera, thus enabling the implementation of spectral-temporal-spatial algorithms in real time to provide high sensitivity for the detection of weak signals in a high-clutter background environment with low sensitivity to camera motion. The challenge in the design was the independent actuation of each Fabry-Perot element in the array, allowing for individual tuning. An additional challenge was the need to maximize the fill factor to improve the spatial coverage with minimal dead space. This paper addresses only the concept design and analysis of the Fabry-Perot tunable filter array; a previous paper presented at SPIE DSS in 2012 explained the design of the optical array.
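The tuning behavior underlying each filter element follows the standard Fabry-Perot (Airy) transmission function: peaks occur where the round-trip optical path is an integer number of wavelengths, so changing the MOEMS gap shifts the passband. A sketch under ideal lossless-mirror assumptions (not the paper's device model):

```python
import numpy as np

def fp_transmission(wavelength_um, gap_um, reflectance, n=1.0):
    """Ideal Fabry-Perot (Airy) transmission of a lossless etalon.

    Transmission peaks where 2*n*gap equals an integer number of
    wavelengths; higher mirror reflectance narrows the passband.
    """
    delta = 4 * np.pi * n * gap_um / wavelength_um      # round-trip phase
    F = 4 * reflectance / (1 - reflectance) ** 2        # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

# On resonance (gap = m * lambda / 2) the ideal etalon transmits fully
assert abs(fp_transmission(4.0, gap_um=2.0, reflectance=0.9) - 1.0) < 1e-12
```

Because each array element only needs to scan the narrow sub-band assigned to it, the required gap travel per element is small, which is the tuning relaxation the abstract describes.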

  2. Method for determining and displaying the spatial distribution of a spectral pattern of received light

    DOEpatents

    Bennett, Charles L.

    1996-01-01

    An imaging Fourier transform spectrometer (10, 210) having a Fourier transform infrared spectrometer (12) providing a series of images (40) to a focal plane array camera (38). The focal plane array camera (38) is clocked to a multiple of zero crossing occurrences as caused by a moving mirror (18) of the Fourier transform infrared spectrometer (12) and as detected by a laser detector (50) such that the frame capture rate of the focal plane array camera (38) corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer (12). The images (40) are transmitted to a computer (45) for processing such that representations of the images (40) as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor (60) or otherwise stored and manipulated by the computer (45).

  3. High energy X-ray pinhole imaging at the Z facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McPherson, L. Armon; Ampleford, David J.; Coverdale, Christine A.

    A new high photon energy (hν > 15 keV) time-integrated pinhole camera (TIPC) has become available at the Z facility for diagnostic applications. This camera employs five pinholes in a linear array for recording five images at once onto an image plate detector. Each pinhole may be independently filtered to yield five different spectral responses. The pinhole array is fabricated from a 1-cm-thick tungsten block and is available with either straight pinholes or conical pinholes. Each pinhole within the array block is 250 μm in diameter. The five pinholes are splayed with respect to each other such that they point to the same location in space, and hence present the same view of the target load at the Z facility. The fielding distance is 66 cm and the nominal image magnification is 0.374. Initial experimental results are shown to illustrate the performance of the camera.

  4. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full-well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.

  5. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
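The winner-take-all fusion step described above can be sketched as follows: with an L-shaped three-camera array, each pixel gets matching costs along two baselines; summing the cost volumes and taking the per-pixel minimum yields the fused disparity map. This is an illustrative NumPy reconstruction of the idea, not the paper's CUDA implementation:

```python
import numpy as np

def wta_fused_disparity(cost_horizontal, cost_vertical):
    """Winner-take-all fusion of two matching-cost volumes.

    Each cost volume has shape (H, W, D) for D candidate disparities;
    the fused disparity at each pixel is the candidate with the lowest
    combined cost.
    """
    combined = cost_horizontal + cost_vertical
    return np.argmin(combined, axis=2)

# Toy 1x1-pixel example with 3 candidate disparities
ch = np.array([[[5.0, 1.0, 4.0]]])
cv = np.array([[[4.0, 2.0, 9.0]]])
disp = wta_fused_disparity(ch, cv)
# combined costs are [9, 3, 13] -> winner is disparity 1
```

On a GPU the argmin over candidates parallelizes naturally per pixel, which is what makes the approach real-time capable.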

  6. High energy X-ray pinhole imaging at the Z facility

    DOE PAGES

    McPherson, L. Armon; Ampleford, David J.; Coverdale, Christine A.; ...

    2016-06-06

    A new high photon energy (hν > 15 keV) time-integrated pinhole camera (TIPC) has become available at the Z facility for diagnostic applications. This camera employs five pinholes in a linear array for recording five images at once onto an image plate detector. Each pinhole may be independently filtered to yield five different spectral responses. The pinhole array is fabricated from a 1-cm-thick tungsten block and is available with either straight pinholes or conical pinholes. Each pinhole within the array block is 250 μm in diameter. The five pinholes are splayed with respect to each other such that they point to the same location in space, and hence present the same view of the target load at the Z facility. The fielding distance is 66 cm and the nominal image magnification is 0.374. Initial experimental results are shown to illustrate the performance of the camera.

  7. Questions Students Ask: The Red-Eye Effect.

    ERIC Educational Resources Information Center

    Physics Teacher, 1985

    1985-01-01

    Addresses the question of why a dog's eyes appear red and glow when a flash photograph is taken. Conditions for the red-eye effect, light paths involved, structure of the eye, and typical cameras and lenses are discussed. Also notes differences between the eyes of nocturnal animals and humans. (JN)

  8. An evaluation of red light camera (photo-red) enforcement programs in Virginia : a report in response to a request by Virginia's Secretary of Transportation.

    DOT National Transportation Integrated Search

    2005-01-01

    Red light running, which is defined as the act of a motorist entering an intersection after the traffic signal has turned red, caused almost 5,000 crashes in Virginia in 2003, resulting in at least 18 deaths and more than 3,800 injuries. In response ...

  9. An image-based array trigger for imaging atmospheric Cherenkov telescope arrays

    NASA Astrophysics Data System (ADS)

    Dickinson, Hugh; Krennrich, Frank; Weinstein, Amanda; Eisch, Jonathan; Byrum, Karen; Anderson, John; Drake, Gary

    2018-05-01

    It is anticipated that forthcoming, next generation, atmospheric Cherenkov telescope arrays will include a number of medium-sized telescopes that are constructed using a dual-mirror Schwarzschild-Couder configuration. These telescopes will sample a wide (8°) field of view using a densely pixelated camera comprising over 10⁴ individual readout channels. A readout frequency congruent with the expected single-telescope trigger rates would result in substantial data rates. To ameliorate these data rates, a novel, hardware-level Distributed Intelligent Array Trigger (DIAT) is envisioned. A copy of the DIAT operates autonomously at each telescope and uses reduced resolution imaging data from a limited subset of nearby telescopes to veto events prior to camera readout and any subsequent network transmission of camera data that is required for centralized storage or aggregation. We present the results of Monte-Carlo simulations that evaluate the efficacy of a "Parallax width" discriminator that can be used by the DIAT to efficiently distinguish between genuine gamma-ray initiated events and unwanted background events that are initiated by hadronic cosmic rays.

  10. Large, high resolution integrating TV sensor for astronomical applications

    NASA Technical Reports Server (NTRS)

    Spitzer, L. J.

    1977-01-01

    A magnetically focused SEC tube developed for photometric applications is described. Efforts to design a 70 mm version of the tube which meets the ST f/24 camera requirements of the space telescope are discussed. The photometric accuracy of the 70 mm tube is expected to equal that of the previously developed 35 mm tube. The tube meets the criterion of 50 percent response at 20 cycles/mm in the central region of the format, and, with the removal of the remaining magnetic parts, this spatial frequency is expected over almost all of the format. Since the ST f/24 camera requires sensitivity in the red as well as the ultraviolet and visible spectra, attempts were made to develop tubes with this ability. It was found that it may be necessary to choose between red and u.v. sensitivity and trade off red sensitivity for low background. Results of environmental tests indicate no substantive problems in utilizing the tube in a flight camera system that will meet the space shuttle launch requirements.

  11. Growth Chambers on the International Space Station for Large Plants

    NASA Technical Reports Server (NTRS)

    Massa, G. D.; Wheeler, R. M.; Morrow, R. C.; Levine, H. G.

    2016-01-01

    The International Space Station (ISS) now has platforms for conducting research on horticultural plant species under LED lighting, and those capabilities continue to expand. The 'Veggie' vegetable production system was deployed to the ISS as an applied research platform for food production in space. Veggie is capable of growing a wide array of horticultural crops. It was designed for low power usage, low launch mass and stowage volume, and minimal crew time requirements. The Veggie flight hardware consists of a light cap containing red (630 nm), blue (455 nm) and green (530 nm) LEDs. Interfacing with the light cap is an extendable bellows/baseplate for enclosing the plant canopy. A second large plant growth chamber, the Advanced Plant Habitat (APH), will fly to the ISS in 2017. APH will be a fully controllable environment for high-quality plant physiological research. APH will control light (quality, level, and timing), temperature, CO2, relative humidity, and irrigation, while scrubbing any cabin or plant-derived ethylene and other volatile organic compounds. Additional capabilities include sensing of leaf temperature and root-zone moisture, root-zone temperature, and oxygen concentration. The light cap will have red (630 nm), blue (450 nm), green (525 nm), far-red (730 nm) and broad-spectrum white (4100 K) LEDs. There will be several internal cameras (visible and IR) to monitor and record plant growth and operations. Veggie and APH are available for research proposals.

  12. Water Ice on Pluto

    NASA Image and Video Library

    2015-10-16

    The Ralph instrument on NASA's New Horizons spacecraft detected water ice on Pluto's surface, picking up on the ice's near-infrared spectral characteristics. (See featured image from Oct. 8, 2015.) The middle panel shows a region west of Pluto's "heart" feature -- which the mission team calls Tombaugh Regio -- about 280 miles (450 kilometers) across. It combines visible imagery from Ralph's Multispectral Visible Imaging Camera (MVIC) with infrared spectroscopy from the Linear Etalon Imaging Spectral Array (LEISA). Areas with the strongest water ice spectral signature are highlighted in blue. Major outcrops of water ice occur in regions informally called Viking Terra, along Virgil Fossa west of Elliot crater, and in Baré Montes. Numerous smaller outcrops are associated with impact craters and valleys between mountains. In the lower left panel, LEISA spectra are shown for two regions indicated by cyan and magenta boxes. The white curve is a water ice model spectrum, showing similar features to the cyan spectrum. The magenta spectrum is dominated by methane ice absorptions. The lower right panel shows an MVIC enhanced color view of the region in the white box, with MVIC's blue, red and near-infrared filters displayed in blue, green and red channels, respectively. The regions showing the strongest water ice signature are associated with terrains that are actually a lighter shade of red. http://photojournal.jpl.nasa.gov/catalog/PIA20030

  13. Spectral quality affects disease development of three pathogens on hydroponically grown plants.

    PubMed

    Schuerger, A C; Brown, C S

    1997-02-01

    Plants were grown under light-emitting diode (LED) arrays with various spectra to determine the effects of light quality on the development of diseases caused by tomato mosaic virus (ToMV) on pepper (Capsicum annuum L.), powdery mildew [Sphaerotheca fuliginea (Schlectend:Fr.) Pollaci] on cucumber (Cucumis sativus L.), and bacterial wilt (Pseudomonas solanacearum Smith) on tomato (Lycopersicon esculentum Mill.). One LED (660) array supplied 99% red light at 660 nm (25 nm bandwidth at half-peak height) and 1% far-red light between 700 to 800 nm. A second LED (660/735) array supplied 83% red light at 660 nm and 17% far-red light at 735 nm (25 nm bandwidth at half-peak height). A third LED (660/BF) array supplied 98% red light at 660 nm, 1% blue light (BF) between 350 to 550 nm, and 1% far-red light between 700 to 800 nm. Control plants were grown under broad-spectrum metal halide (MH) lamps. Plants were grown at a mean photon flux (300 to 800 nm) of 330 micromoles m-2 s-1 under a 12-h day/night photoperiod. Spectral quality affected each pathosystem differently. In the ToMV/pepper pathosystem, disease symptoms developed slower and were less severe in plants grown under light sources that contained blue and UV-A wavelengths (MH and 660/BF treatments) compared to plants grown under light sources that lacked blue and UV-A wavelengths (660 and 660/735 LED arrays). In contrast, the number of colonies per leaf was highest and the mean colony diameters of S. fuliginea on cucumber plants were largest on leaves grown under the MH lamp (highest amount of blue and UV-A light) and least on leaves grown under the 660 LED array (no blue or UV-A light). The addition of far-red irradiation to the primary light source in the 660/735 LED array increased the colony counts per leaf in the S. fuliginea/cucumber pathosystem compared to the red-only (660) LED array. In the P. solanacearum/tomato pathosystem, disease symptoms were less severe in plants grown under the 660 LED array, but the effects of spectral quality on disease development when other wavelengths were included in the light source (MH-, 660/BF-, and 660/735-grown plants) were equivocal. These results demonstrate that spectral quality may be useful as a component of an integrated pest management program for future space-based controlled ecological life support systems.

  14. Spectral quality affects disease development of three pathogens on hydroponically grown plants

    NASA Technical Reports Server (NTRS)

    Schuerger, A. C.; Brown, C. S.; Sager, J. C. (Principal Investigator)

    1997-01-01

    Plants were grown under light-emitting diode (LED) arrays with various spectra to determine the effects of light quality on the development of diseases caused by tomato mosaic virus (ToMV) on pepper (Capsicum annuum L.), powdery mildew [Sphaerotheca fuliginea (Schlectend:Fr.) Pollaci] on cucumber (Cucumis sativus L.), and bacterial wilt (Pseudomonas solanacearum Smith) on tomato (Lycopersicon esculentum Mill.). One LED (660) array supplied 99% red light at 660 nm (25 nm bandwidth at half-peak height) and 1% far-red light between 700 to 800 nm. A second LED (660/735) array supplied 83% red light at 660 nm and 17% far-red light at 735 nm (25 nm bandwidth at half-peak height). A third LED (660/BF) array supplied 98% red light at 660 nm, 1% blue light (BF) between 350 to 550 nm, and 1% far-red light between 700 to 800 nm. Control plants were grown under broad-spectrum metal halide (MH) lamps. Plants were grown at a mean photon flux (300 to 800 nm) of 330 micromoles m-2 s-1 under a 12-h day/night photoperiod. Spectral quality affected each pathosystem differently. In the ToMV/pepper pathosystem, disease symptoms developed slower and were less severe in plants grown under light sources that contained blue and UV-A wavelengths (MH and 660/BF treatments) compared to plants grown under light sources that lacked blue and UV-A wavelengths (660 and 660/735 LED arrays). In contrast, the number of colonies per leaf was highest and the mean colony diameters of S. fuliginea on cucumber plants were largest on leaves grown under the MH lamp (highest amount of blue and UV-A light) and least on leaves grown under the 660 LED array (no blue or UV-A light). The addition of far-red irradiation to the primary light source in the 660/735 LED array increased the colony counts per leaf in the S. fuliginea/cucumber pathosystem compared to the red-only (660) LED array. In the P. solanacearum/tomato pathosystem, disease symptoms were less severe in plants grown under the 660 LED array, but the effects of spectral quality on disease development when other wavelengths were included in the light source (MH-, 660/BF-, and 660/735-grown plants) were equivocal. These results demonstrate that spectral quality may be useful as a component of an integrated pest management program for future space-based controlled ecological life support systems.

  15. Design and Expected Performance of GISMO-2, a Two Color Millimeter Camera for the IRAM 30 m Telescope

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes G.; Benford, Dominic J.; Dwek, Eli; Hilton, Gene; Fixsen, Dale J.; Irwin, Kent; Jhabvala, Christine; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; hide

    2014-01-01

    We present the main design features for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based backshort under grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  16. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-002 (4 Dec 1993) --- This view, backdropped against the blackness of space, shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed from inside Endeavour's cabin with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view features the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  17. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-003 (4 Dec 1993) --- This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  18. Opportunistic traffic sensing using existing video sources (phase II).

    DOT National Transportation Integrated Search

    2017-02-01

    The purpose of the project reported on here was to investigate methods for automatic traffic sensing using traffic surveillance : cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...

  19. Design and Fabrication of Two-Dimensional Semiconducting Bolometer Arrays for the High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC-II)

    NASA Technical Reports Server (NTRS)

    Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. Darren; Harper, D. Al; Jhabvala, Murzy D.; hide

    2002-01-01

    The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC "Pop-Up" Detectors (PUDs) use a unique folding technique to enable a 12 x 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar(Registered Trademark) suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers, while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, and keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element, planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 x 32-element array. Engineering results from the first light run of SHARC II at the CalTech Submillimeter Observatory (CSO) are presented.

  20. Design and Fabrication of Two-Dimensional Semiconducting Bolometer Arrays for the High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC-II)

    NASA Technical Reports Server (NTRS)

    Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. Darren; Harper, D. Al; Jhabvala, Murzy D.

    2002-01-01

    The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC 'Pop-up' Detectors (PUDs) use a unique folding technique to enable a 12 x 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar(trademark) suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers, while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, and keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element, planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 x 32-element array. Engineering results from the first light run of SHARC II at the Caltech Submillimeter Observatory (CSO) are presented.

  1. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.

  2. Status of a Novel 4-Band Submm/mm Camera for the Caltech Submillimeter Observatory

    NASA Astrophysics Data System (ADS)

    Noroozian, Omid; Day, P.; Glenn, J.; Golwala, S.; Kumar, S.; LeDuc, H. G.; Mazin, B.; Nguyen, H. T.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Zmuidzinas, J.

    2007-12-01

    Submillimeter observations are important to the understanding of galaxy formation and evolution. Determination of the spectral energy distribution in the millimeter and submillimeter regimes allows important and powerful diagnostics. To this end, we are undertaking the construction of a 4-band (750, 850, 1100, 1300 microns) 8-arcminute field of view camera for the Caltech Submillimeter Observatory. The focal plane will make use of three novel technologies: photolithographic phased array antennae, on-chip band-pass filters, and microwave kinetic inductance detectors (MKID). The phased array antenna design obviates beam-defining feed horns. On-chip band-pass filters eliminate band-defining metal-mesh filters. Together, the antennae and filters enable each spatial pixel to observe in all four bands simultaneously. MKIDs are highly multiplexable background-limited photon detectors. Readout of the MKID array will be done with software-defined radio (See poster by Max-Moerbeck et al.). This camera will provide an order-of-magnitude larger mapping speed than existing instruments and will be comparable to SCUBA 2 in terms of the detection rate for dusty sources, but complementary to SCUBA 2 in terms of wavelength coverage. We present results from an engineering run with a demonstration array, the baseline design for the science array, and the status of instrument design, construction, and testing. We anticipate the camera will be available at the CSO in 2010. This work has been supported by NASA ROSES APRA grants NNG06GG16G and NNG06GC71G, the NASA JPL Research and Technology Development Program, and the Gordon and Betty Moore Foundation.

  3. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of the light rays it collects, enabling multiple reconstruction functions such as refocusing at different depths and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems in making intelligent analysis and corrections.
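    The Shack-Hartmann principle that this abstract uses as its baseline can be illustrated in miniature: each lenslet converts the local wavefront tilt into a spot displacement on the detector, so dividing the measured displacement by the lenslet focal length recovers the local slope. The sketch below is a generic illustration of that idea, not the authors' plenoptic algorithm; all names and numbers are hypothetical.

```python
import numpy as np

def wavefront_slopes(centroids, references, focal_length):
    """Estimate local wavefront slopes from lenslet spot displacements.

    centroids, references: (N, 2) arrays of measured and reference spot
    positions, in the same length units as focal_length.
    Returns (N, 2) slopes in radians (small-angle approximation).
    """
    return (np.asarray(centroids) - np.asarray(references)) / focal_length

# Toy example: a uniform tilt shifts every spot by the same vector.
refs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # mm
meas = refs + np.array([0.002, -0.001])                             # mm
slopes = wavefront_slopes(meas, refs, focal_length=5.0)  # 5 mm lenslets
```

    Because the shift is identical for every lenslet, the recovered slope field is constant, i.e. a pure tilt; a real reconstructor would then integrate such slopes into a phase map.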

  4. Light from Red-Hot Planet

    NASA Technical Reports Server (NTRS)

    2009-01-01

    This figure charts 30 hours of observations taken by NASA's Spitzer Space Telescope of a strongly irradiated exoplanet (a planet orbiting a star beyond our own). Spitzer measured changes in the planet's heat, or infrared light.

    The lower graph shows precise measurements of infrared light with a wavelength of 8 microns coming from the HD 80606 stellar system. The system consists of a sun-like star and a planetary companion on an extremely eccentric, comet-like orbit. The geometry of the planet-star encounter is shown in the upper part of the figure.

    As the planet swung through its closest approach to the star, the Spitzer observations indicated that it experienced very rapid heating (as shown by the red curve). Just before close approach, the planet was eclipsed by the star as seen from Earth, allowing astronomers to determine the amount of energy coming from the planet in comparison to the amount coming from the star.

    The observations were made in Nov. of 2007, using Spitzer's infrared array camera. They represent a significant first for astronomers, opening the door to studying changes in atmospheric conditions of planets far beyond our own solar system.

  5. KSC-01pp1802

    NASA Image and Video Library

    2001-12-01

    KENNEDY SPACE CENTER, Fla. - STS-109 Mission Specialist Richard Linnehan (left) and Payload Commander John Grunsfeld get a feel for tools and equipment that will be used on the mission. The crew is at KSC to take part in Crew Equipment Interface Test activities that include familiarization with the orbiter and equipment. The goal of the mission is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  6. Congo red agar, a differential medium for Aeromonas salmonicida, detects the presence of the cell surface protein array involved in virulence.

    PubMed Central

    Ishiguro, E E; Ainsworth, T; Trust, T J; Kay, W W

    1985-01-01

    Strains of the fish pathogen Aeromonas salmonicida which possess the cell surface protein array known as the A-layer (A+) involved in virulence formed deep red colonies on tryptic soy agar containing 30 micrograms of Congo red per ml. These were readily distinguished from colorless or light orange colonies of avirulent mutants lacking A-layer (A-). The utility of Congo red agar for quantifying A+ and A- cells in the routine assessment of culture virulence was demonstrated. Intact A+ cells adsorbed Congo red, whereas A- mutants did not bind Congo red unless first permeabilized with EDTA. The dye-binding component of A+ cells was shown to be the 50,000-Mr A-protein component of the surface array. Purified A-protein avidly bound Congo red at a dye-to-protein molar ratio of about 30 by a nonspecific hydrophobic mechanism enhanced by high salt concentrations. Neither A+ nor A- cells adsorbed to Congo red-Sepharose columns at low salt concentrations. On the other hand, A+ (but not A-) cells were avidly bound at high salt concentrations. PMID:3934141

  7. Kleptoparasitic behavior and species richness at Mt. Graham red squirrel middens

    Treesearch

    Andrew J. Edelman; John L. Koprowski; Jennifer L. Edelman

    2005-01-01

    We used remote photography to assess the frequency of inter- and intra-specific kleptoparasitism and species richness at Mt. Graham red squirrel (Tamiasciurus hudsonicus grahamensis) middens. Remote cameras and conifer cones were placed at occupied and unoccupied middens, and random sites. Species richness of small mammals was higher at red squirrel...

  8. Avian nestling predation by endangered Mount Graham red squirrel

    Treesearch

    Claire A. Zugmeyer; John L. Koprowski

    2007-01-01

    Studies using artificial nests or remote cameras have documented avian predation by red squirrels (Tamiasciurus hudsonicus). Although several direct observations of avian predation events are known in the northern range of the red squirrel distribution, no accounts have been reported in the southern portion. We observed predation upon a hermit thrush...

  9. Model of an optical system's influence on sensitivity of microbolometric focal plane array

    NASA Astrophysics Data System (ADS)

    Gogler, Sławomir; Bieszczad, Grzegorz; Zarzycka, Alicja; Szymańska, Magdalena; Sosnowski, Tomasz

    2012-10-01

    Thermal imagers and the infrared array sensors used therein are subject to a calibration procedure, and to evaluation of their voltage sensitivity to incident radiation, during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. The detectors used in a thermal camera are illuminated by infrared radiation transmitted through a specialized optical system, and each optical system influences the irradiance distribution across the sensor array. In this article a model is proposed that describes the irradiance distribution across an array sensor working with the optical system used in the calibration set-up. The method accounts for the optical and geometrical properties of the array set-up. By means of Monte-Carlo simulation, a large number of rays was traced to the sensor plane, which allowed the irradiance distribution across the image plane to be determined for different aperture-limiting configurations. The simulated results were compared with the proposed analytical expression. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
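    The Monte-Carlo step the abstract describes can be sketched generically: trace many random rays from a circular aperture to the sensor plane and histogram weighted hits into an irradiance map. The sketch below is a toy model only; the cos⁴ field-angle falloff is the standard textbook weighting, not the paper's model, and every name and dimension is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def irradiance_map(n_rays, aperture_radius, aperture_dist, sensor_half, bins):
    """Monte-Carlo estimate of relative irradiance across a sensor plane.

    Rays start at uniform random points on a circular aperture located
    aperture_dist in front of the sensor and land at uniform random
    points on a square sensor of half-width sensor_half; each hit is
    weighted by cos^4 of its field angle and binned into a bins x bins map.
    """
    # Uniform samples on the aperture disc (polar coordinates)
    r = aperture_radius * np.sqrt(rng.random(n_rays))
    t = 2.0 * np.pi * rng.random(n_rays)
    x0, y0 = r * np.cos(t), r * np.sin(t)
    # Uniform target points on the sensor square
    xs = rng.uniform(-sensor_half, sensor_half, n_rays)
    ys = rng.uniform(-sensor_half, sensor_half, n_rays)
    # cos^4 weighting from the transverse offset of each ray
    d = np.hypot(xs - x0, ys - y0)
    w = (aperture_dist / np.hypot(d, aperture_dist)) ** 4
    img, _, _ = np.histogram2d(xs, ys, bins=bins,
                               range=[[-sensor_half, sensor_half]] * 2,
                               weights=w)
    return img / img.max()   # normalize to peak irradiance

img = irradiance_map(200_000, 5.0, 25.0, 10.0, 16)
```

    With enough rays the map reproduces the expected vignetting-like falloff: central bins are brighter than the corners, which is exactly the non-uniformity the calibration procedure corrects for.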

  10. Multispectral data processing from unmanned aerial vehicles: application in precision agriculture using different sensors and platforms

    NASA Astrophysics Data System (ADS)

    Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert

    2017-04-01

    Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
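    The checkpoint accuracy assessment described in this abstract reduces to a simple computation: subtract the dGPS coordinates of each target from the coordinates of the same target read off the georeferenced mosaic, and take the root mean square of the residual lengths. A minimal sketch with invented numbers (the function name and values are illustrative, not the paper's data):

```python
import numpy as np

def checkpoint_rmse(gps_xy, mosaic_xy):
    """Horizontal RMSE between dGPS checkpoint coordinates and the
    coordinates of the same targets in a georeferenced mosaic.

    gps_xy, mosaic_xy: (N, 2) arrays of planimetric coordinates in metres.
    """
    d = np.asarray(mosaic_xy) - np.asarray(gps_xy)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

gps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
mosaic = gps + np.array([[0.03, -0.04], [0.0, 0.05], [-0.05, 0.0]])
rmse = checkpoint_rmse(gps, mosaic)  # every offset is 5 cm long -> 0.05 m
```

    The same residual vectors also reveal systematic shifts between band mosaics (e.g. RGB vs. NIR), which is why co-registration quality is usually reported alongside the RMSE.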

  11. Near infra-red astronomy with adaptive optics and laser guide stars at the Keck Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, C.E.; Gavel, D.T.; Olivier, S.S.

    1995-08-03

    A laser guide star adaptive optics system is being built for the W. M. Keck Observatory's 10-meter Keck II telescope. Two new near infra-red instruments will be used with this system: a high-resolution camera (NIRC 2) and an echelle spectrometer (NIRSPEC). The authors describe the expected capabilities of these instruments for high-resolution astronomy, using adaptive optics with either a natural star or a sodium-layer laser guide star as a reference. They compare the expected performance of these planned Keck adaptive optics instruments with that predicted for the NICMOS near infra-red camera, which is scheduled to be installed on the Hubble Space Telescope in 1997.

  12. Space infrared telescope facility wide field and diffraction limited array camera (IRAC)

    NASA Technical Reports Server (NTRS)

    Fazio, Giovanni G.

    1988-01-01

    The wide-field and diffraction limited array camera (IRAC) is capable of two-dimensional photometry in either a wide-field or diffraction-limited mode over the wavelength range from 2 to 30 microns with a possible extension to 120 microns. A low-doped indium antimonide detector was developed for 1.8 to 5.0 microns, detectors were tested and optimized for the entire 1.8 to 30 micron range, beamsplitters were developed and tested for the 1.8 to 30 micron range, and tradeoff studies of the camera's optical system were performed. Data are presented on the performance of InSb, Si:In, Si:Ga, and Si:Sb array detectors bumpbonded to a multiplexed CMOS readout chip of the source-follower type at SIRTF operating backgrounds (equal to or less than 1 x 10 to the 8th ph/sq cm/sec) and temperature (4 to 12 K). Some results at higher temperatures are also presented for comparison to SIRTF temperature results. Data are also presented on the performance of IRAC beamsplitters at room temperature at both 0 and 45 deg angle of incidence and on the performance of the all-reflecting optical system baselined for the camera.

  13. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows a short gating clock to be created and provided to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 um CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 um.

  14. KSC-01pp1760

    NASA Image and Video Library

    2001-11-29

    KENNEDY SPACE CENTER, Fla. -- Fully unwrapped, the Advanced Camera for Surveys, which is suspended by an overhead crane, is checked over by workers. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  15. Lens and Camera Arrays for Sky Surveys and Space Surveillance

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Cox, D.; McGraw, J.; Zimmer, P.

    2016-09-01

    In recent years, a number of sky survey projects have chosen to use arrays of commercial cameras coupled with commercial photographic lenses to enable low-cost, wide-area observation. Projects such as SuperWASP, FAVOR, RAPTOR, Lotis, PANOPTES, and DragonFly rely on multiple cameras with commercial lenses to image wide areas of the sky each night. The sensors are usually commercial astronomical charge coupled devices (CCDs) or digital single-lens reflex (DSLR) cameras, while the lenses are large-aperture, high-end consumer items intended for general photography. While much of this equipment is very capable and relatively inexpensive, this approach comes with a number of significant limitations that reduce the sensitivity and overall utility of the image data. The most frequently encountered limitations include lens vignetting, narrow spectral bandpass, and a relatively large point spread function. Understanding these limits helps to assess the utility of the data and identify areas where advanced optical designs could significantly improve survey performance.

  16. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Treesearch

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  17. Red light running camera assessment.

    DOT National Transportation Integrated Search

    2011-04-01

    In the 2004-2007 period, the Mission Street SE and 25th Street SE intersection in Salem, Oregon showed relatively few crashes attributable to red light running (RLR) but, since a high number of RLR violations were observed, the intersection was ident...

  18. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
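    The undersampling the abstract refers to starts with the CFA itself: each pixel records only one color, and the missing samples must be interpolated. As a minimal illustration in Python (a plain bilinear sketch under an RGGB layout, not the paper's AWF SR algorithm; the helper names are ours), the green channel can be estimated at red/blue sites from its four nearest green neighbors:

```python
def mosaic_rggb(rgb, h, w):
    """Sample a full-color image (nested lists of (r, g, b) tuples)
    through an RGGB Bayer pattern, keeping one channel per pixel."""
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y][x]
            if y % 2 == 0 and x % 2 == 0:
                out[y][x] = r          # red site
            elif y % 2 == 1 and x % 2 == 1:
                out[y][x] = b          # blue site
            else:
                out[y][x] = g          # green site
    return out

def demosaic_green(cfa, h, w):
    """Bilinear estimate of the green channel at red/blue sites from
    the four nearest green neighbors (image edges clamped)."""
    green = [row[:] for row in cfa]
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 0:   # red or blue site in RGGB
                nbrs = [cfa[yy][xx]
                        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < h and 0 <= xx < w]
                green[y][x] = sum(nbrs) / len(nbrs)
    return green
```

Interpolation of this kind fills in the grid but cannot undo aliasing from the coarser per-channel sampling, which is the gap the multi-frame SR approach targets.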

  19. High energy X-ray pinhole imaging at the Z facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McPherson, L. Armon; Ampleford, David J., E-mail: damplef@sandia.gov; Coverdale, Christine A.

    A new high photon energy (hν > 15 keV) time-integrated pinhole camera (TIPC) has been developed as a diagnostic instrument at the Z facility. This camera employs five pinholes in a linear array for recording five images at once onto an image plate detector. Each pinhole may be independently filtered to yield five different spectral responses. The pinhole array is fabricated from a 1-cm thick tungsten block and is available with either straight pinholes or conical pinholes. Each pinhole within the array block is 250 μm in diameter. The five pinholes are splayed with respect to each other such that they point to the same location in space, and hence present the same view of the radiation source at the Z facility. The fielding distance from the radiation source is 66 cm and the nominal image magnification is 0.374. Initial experimental results from TIPC are shown to illustrate the performance of the camera.
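    The quoted geometry is consistent with the standard pinhole relation m = q/p (pinhole-to-image distance over source-to-pinhole distance). The implied detector distance below is an inference from the stated numbers, not given in the source:

```python
def pinhole_magnification(source_to_pinhole_cm, pinhole_to_detector_cm):
    """Geometric magnification of a pinhole imager: m = q / p."""
    return pinhole_to_detector_cm / source_to_pinhole_cm

# The quoted m = 0.374 at a 66 cm fielding distance implies a
# pinhole-to-image-plate distance of roughly 0.374 * 66 ≈ 24.7 cm
# (our inference from the stated numbers).
implied_q_cm = 0.374 * 66.0
```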

  20. Planetary Building Blocks Found in Surprising Place

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1

    This graph of data from NASA's Spitzer Space Telescope shows that an extraordinarily low-mass brown dwarf, or 'failed star,' is circled by a disc of planet-building dust. The brown dwarf, called OTS 44, is only 15 times the mass of Jupiter, making it the smallest known brown dwarf to host a planet-forming disc.

    Spitzer was able to see this unusual disc by measuring its infrared brightness. Whereas a brown dwarf without a disc (red dashed line) radiates infrared light at shorter wavelengths, a brown dwarf with a disc (orange line) gives off excess infrared light at longer wavelengths. This surplus light comes from the disc itself and is represented here as a yellow dotted line. Actual data points from observations of OTS 44 are indicated with orange dots. These data were acquired using Spitzer's infrared array camera.

  1. Galaxies Collide to Create Hot, Huge Galaxy

    NASA Technical Reports Server (NTRS)

    2009-01-01

    This image of a pair of colliding galaxies called NGC 6240 shows them in a rare, short-lived phase of their evolution, just before they merge into a single, larger galaxy. The prolonged, violent collision has drastically altered the appearance of both galaxies and created huge amounts of heat, turning NGC 6240 into an 'infrared luminous' active galaxy.

    A rich variety of active galaxies, with different shapes, luminosities, and radiation profiles, exists. These galaxies may be related; astronomers have suspected that they may represent an evolutionary sequence. By catching different galaxies at different stages of merging, a story emerges of one type of active galaxy changing into another. NGC 6240 provides an important 'missing link' in this process.

    This image was created from combined data from the infrared array camera of NASA's Spitzer Space Telescope at 3.6 and 8.0 microns (red) and visible light from NASA's Hubble Space Telescope (green and blue).

  2. SIRTF Tools for DIRT

    NASA Astrophysics Data System (ADS)

    Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.

    2003-12-01

    The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF.

  3. Monitoring Telluric Water Absorption with CAMAL

    NASA Astrophysics Data System (ADS)

    Baker, Ashley; Blake, Cullen; Sliski, David

    2017-01-01

    Ground-based observations are severely limited by telluric water vapor absorption features, which are highly variable in time and significantly complicate both spectroscopy and photometry in the near-infrared (NIR). To achieve the stability required to study Earth-sized exoplanets, monitoring the precipitable water vapor (PWV) becomes necessary to mitigate the impact of telluric lines on radial velocity measurements and transit light curves. To address this issue, we present the Camera for the Automatic Monitoring of Atmospheric Lines (CAMAL), a stand-alone, inexpensive 6-inch aperture telescope dedicated to measuring PWV at the Whipple Observatory. CAMAL utilizes three NIR narrowband filters to trace the amount of atmospheric water vapor affecting simultaneous observations with the MINiature Exoplanet Radial Velocity Array (MINERVA) and MINERVA-Red telescopes. We present the current design of CAMAL, discuss our calibration methods, and show PWV measurements taken with CAMAL compared to those of a nearby GPS water vapor monitor.

  4. VizieR Online Data Catalog: Bright white dwarfs IRAC photometry (Barber+, 2016)

    NASA Astrophysics Data System (ADS)

    Barber, S. D.; Belardi, C.; Kilic, M.; Gianninas, A.

    2017-07-01

    Mid-infrared photometry, like the 3.4 and 4.6 um photometry available from WISE, is necessary to detect emission from a debris disc orbiting a WD. WISE, however, has poor spatial resolution (6 arcsec beam size) and is known to have a 75 per cent false positive rate for detecting dusty discs around WDs fainter than 14.5 (15) mag in W1 (W2) (Barber et al., 2014ApJ...786...77B). To mitigate this high rate of spurious detections, we compile higher spatial resolution archival data from the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope. We query the Spitzer Heritage Archive for any observations within 10 arcsec of the 1265 WDs from Gianninas et al. (2011, Cat. J/ApJ/743/138) and find 907 Astronomical Observing Requests (AORs) for 381 WDs. (1 data file).

  5. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring the different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
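    The quoted 562.5 MB/s is consistent with a simple raw-rate calculation. The per-camera frame size and the 1 byte/pixel figure below are our assumptions for illustration; the source states only the total:

```python
def throughput_mb_per_s(n_cameras, pixels_per_camera, fps, bytes_per_pixel):
    """Raw data rate of a synchronized camera array, in MB/s (1 MB = 1e6 B)."""
    return n_cameras * pixels_per_camera * fps * bytes_per_pixel / 1e6

# One reading consistent with the quoted 562.5 MB/s total: 25 cameras
# (5 x 5 viewpoints), 0.75-Mpixel raw frames, 30 fps, 1 byte per pixel.
rate = throughput_mb_per_s(25, 0.75e6, 30, 1)
```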

  6. Simulation Study of the Localization of a Near-Surface Crack Using an Air-Coupled Ultrasonic Sensor Array

    PubMed Central

    Delrue, Steven; Aleshin, Vladislav; Sørensen, Mikael; De Lathauwer, Lieven

    2017-01-01

    The importance of Non-Destructive Testing (NDT) to check the integrity of materials in different fields of industry has increased significantly in recent years. Actually, industry demands NDT methods that allow fast (preferably non-contact) detection and localization of early-stage defects with easy-to-interpret results, so that even a non-expert field worker can carry out the testing. The main challenge is to combine as many of these requirements into one single technique. The concept of acoustic cameras, developed for low frequency NDT, meets most of the above-mentioned requirements. These cameras make use of an array of microphones to visualize noise sources by estimating the Direction Of Arrival (DOA) of the impinging sound waves. Until now, however, because of limitations in the frequency range and the lack of integrated nonlinear post-processing, acoustic camera systems have never been used for the localization of incipient damage. The goal of the current paper is to numerically investigate the capabilities of locating incipient damage by measuring the nonlinear airborne emission of the defect using a non-contact ultrasonic sensor array. We will consider a simple case of a sample with a single near-surface crack and prove that after efficient excitation of the defect sample, the nonlinear defect responses can be detected by a uniform linear sensor array. These responses are then used to determine the location of the defect by means of three different DOA algorithms. The results obtained in this study can be considered as a first step towards the development of a nonlinear ultrasonic camera system, comprising the ultrasonic sensor array as the hardware and nonlinear post-processing and source localization software. PMID:28441738
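    The DOA estimation step can be illustrated with the classical delay-and-sum (Bartlett) beamformer for a uniform linear array. This is a generic sketch, not any of the three specific DOA algorithms the study compares, and the simulation numbers are hypothetical:

```python
import cmath
import math

def doa_delay_and_sum(snapshots, d_over_lambda, angles_deg):
    """Classical delay-and-sum (Bartlett) DOA estimate for a uniform
    linear array: steer toward each candidate angle, sum the element
    outputs coherently, and return the angle of maximum power."""
    best_angle, best_power = None, -1.0
    for ang in angles_deg:
        phase = 2.0 * math.pi * d_over_lambda * math.sin(math.radians(ang))
        steered = sum(x * cmath.exp(-1j * phase * k)
                      for k, x in enumerate(snapshots))
        power = abs(steered) ** 2
        if power > best_power:
            best_angle, best_power = ang, power
    return best_angle

# Simulate a noise-free plane wave arriving from 20 degrees on an
# 8-element, half-wavelength-spaced array (hypothetical numbers).
TRUE_DEG, D_OVER_LAMBDA = 20.0, 0.5
phase_true = 2.0 * math.pi * D_OVER_LAMBDA * math.sin(math.radians(TRUE_DEG))
snapshots = [cmath.exp(1j * phase_true * k) for k in range(8)]
estimate = doa_delay_and_sum(snapshots, D_OVER_LAMBDA, range(-90, 91))
```

With real, noisy airborne emissions, many snapshots are averaged and subspace methods can sharpen the peak, but the steering-and-summing idea is the same.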

  7. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  8. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  9. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  10. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  11. The Atacama Cosmology Telescope: Instrument

    NASA Astrophysics Data System (ADS)

    Thornton, Robert J.; Atacama Cosmology Telescope Team

    2010-01-01

    The 6-meter Atacama Cosmology Telescope (ACT) is making detailed maps of the Cosmic Microwave Background at Cerro Toco in northern Chile. In this talk, I focus on the design and operation of the telescope and its commissioning instrument, the Millimeter Bolometer Array Camera. The camera contains three independent sets of optics that operate at 148 GHz, 217 GHz, and 277 GHz with arcminute resolution, each of which couples to a 1024-element array of Transition Edge Sensor (TES) bolometers. I will report on the camera performance, including the beam patterns, optical efficiencies, and detector sensitivities. Under development for ACT is a new polarimeter based on feedhorn-coupled TES devices that have improved sensitivity and are planned to operate at 0.1 K.

  12. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by increasing computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and relying on our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.

  13. The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo

    2016-07-01

    The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the management of the physical communication ports and external ancillary devices, the high-precision time-tag management, the fast data collection and fast data exchange between different camera subsystems, and the interfacing with external systems.

  14. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting the CCD storages, which record the video images, to the photodiodes of the individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase the ultrahigh-speed capture time, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor of two over that of the conventional ultrahigh-speed camera. A drawback was that the beam splitter halved the incident light on each CCD. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By combining the beam splitter with the microlens array, it was possible to build an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  15. To brake or to accelerate? Safety effects of combined speed and red light cameras.

    PubMed

    De Pauw, Ellen; Daniels, Stijn; Brijs, Tom; Hermans, Elke; Wets, Geert

    2014-09-01

    The present study evaluates the traffic safety effect of combined speed and red light cameras at 253 signalized intersections in Flanders, Belgium that were installed between 2002 and 2007. The adopted approach is a before-and-after study with control for the trend. The analyses showed a non-significant increase of 5% in the number of injury crashes. An almost significant decrease of 14% was found for the more severe crashes. The number of rear-end crashes turned out to have increased significantly (+44%), whereas a non-significant decrease (-6%) was found in the number of side crashes. The decrease for the severe crashes was mainly attributable to the effect on side crashes, for which a significant decrease of 24% was found. It is concluded that combined speed and red light cameras have a favorable effect on traffic safety, in particular on severe crashes. However, future research should examine the circumstances of rear-end crashes and how this increase can be managed. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
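    The 'before-and-after study with control for the trend' design can be sketched as a simple ratio estimate: compare the observed after-count with the count expected had the sites merely followed the comparison-group trend. The counts below are illustrative only, since the abstract reports percentages rather than raw numbers:

```python
def trend_corrected_effect(before, after, trend_factor):
    """Before-after effect estimate with control for the trend:
    divide the observed after-count by the count expected if the
    treated sites had only followed the general trend.
    Returns a relative change, e.g. -0.14 means a 14% decrease."""
    expected_after = before * trend_factor
    return after / expected_after - 1.0

# Illustrative numbers: 100 severe crashes before, 90 observed after,
# while the comparison-group trend alone predicts a 5% rise.
effect = trend_corrected_effect(100, 90, 1.05)
```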

  16. ATTICA family of thermal cameras in submarine applications

    NASA Astrophysics Data System (ADS)

    Kuerbitz, Gunther; Fritze, Joerg; Hoefft, Jens-Rainer; Ruf, Berthold

    2001-10-01

    Optronics Mast Systems (US: Photonics Mast Systems) are electro-optical devices that enable a submarine crew to observe the scenery above water while the boat is submerged. Unlike classical submarine periscopes, they are non-hull-penetrating and therefore have no direct viewing capability. Typically they have electro-optical cameras for both the visual and an IR spectral band, with panoramic view and a stabilized line of sight. They can optionally be equipped with laser range-finders, antennas, etc. The brand name ATTICA (Advanced Two-dimensional Thermal Imager with CMOS-Array) characterizes a family of thermal cameras using focal-plane-array (FPA) detectors which can be tailored to a variety of requirements. The modular design of the ATTICA components allows the use of various detectors (InSb, CMT 3...5 μm, CMT 7...11 μm) for specific applications. By means of a microscanner, ATTICA cameras achieve full standard TV resolution using detectors with only 288 x 384 (US: 240 x 320) detector elements. A typical requirement for Optronics Mast Systems is a Quick-Look-Around capability. For FPA cameras this implies the need for a 'descan' module, which can be incorporated in the ATTICA cameras without complications.

  17. Development of an all-in-one gamma camera/CCD system for safeguard verification

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo

    2014-12-01

    For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charged coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. A gamma camera consists of a diverging collimator, a 22 × 22 array CsI(Na) pixelated scintillation crystal with a pixel size of 2 × 2 × 6 mm3 and Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
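    A calibrated sensitivity like the reported 7.78 cps/MBq lets a measured count rate be converted into an activity estimate, valid at the calibration geometry. The measured rate below is an illustrative number of ours:

```python
def activity_mbq(count_rate_cps, sensitivity_cps_per_mbq):
    """Estimate source activity from a measured count rate using the
    camera's calibrated sensitivity. Only valid at the calibration
    geometry (here, a point source 30 cm from the detector)."""
    return count_rate_cps / sensitivity_cps_per_mbq

# With the reported 7.78 cps/MBq, a measured 38.9 cps corresponds to
# about a 5 MBq source at 30 cm (illustrative count rate).
est_mbq = activity_mbq(38.9, 7.78)
```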

  18. Stellar Populations of Lyα Emitters at z ~ 6-7: Constraints on the Escape Fraction of Ionizing Photons from Galaxy Building Blocks

    NASA Astrophysics Data System (ADS)

    Ono, Yoshiaki; Ouchi, Masami; Shimasaku, Kazuhiro; Dunlop, James; Farrah, Duncan; McLure, Ross; Okamura, Sadanori

    2010-12-01

    We investigate the stellar populations of Lyα emitters (LAEs) at z = 5.7 and 6.6 in a 0.65 deg^2 sky area of the Subaru/XMM-Newton Deep Survey (SXDS) Field, using deep images taken with the Subaru/Suprime-Cam, United Kingdom Infrared Telescope/Wide Field Infrared Camera, and Spitzer/Infrared Array Camera (IRAC). We produce stacked multiband images at each redshift from 165 (z = 5.7) and 91 (z = 6.6) IRAC-undetected objects to derive typical spectral energy distributions (SEDs) of z ~ 6-7 LAEs for the first time. The stacked LAEs have UV continua as blue as those of the Hubble Space Telescope (HST)/Wide Field Camera 3 (WFC3) z-dropout galaxies of similar M_UV, with a spectral slope β ~ -3, but at the same time they have red UV-to-optical colors, with detection in the 3.6 μm band. Using SED fitting we find that the stacked LAEs have low stellar masses of ~(3-10) × 10^7 M_sun, very young ages of ~1-3 Myr, negligible dust extinction, and strong nebular emission from the ionized interstellar medium, although the z = 6.6 object is fitted similarly well with high-mass models without nebular emission; inclusion of nebular emission reproduces the red UV-to-optical colors while keeping the UV colors sufficiently blue. We infer that typical LAEs at z ~ 6-7 are building blocks of galaxies seen at lower redshifts. We find a tentative decrease in the Lyα escape fraction from z = 5.7 to 6.6, which may imply an increase in the intergalactic medium neutral fraction. From the minimum contribution of nebular emission required to fit the observed SEDs, we place an upper limit on the escape fraction of ionizing photons of f_esc^ion ~ 0.6 at z = 5.7 and ~0.9 at z = 6.6. We also compare the stellar populations of our LAEs with those of stacked HST/WFC3 z-dropout galaxies. Based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.

  19. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost ~$11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery.
Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.
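    The quoted still-image capacity follows directly from the trigger interval and the deployment length:

```python
def photo_capacity(interval_s, days):
    """Number of stills captured at a fixed trigger interval over a
    deployment of the given length."""
    return int(days * 24 * 3600 / interval_s)

# One photo every 7 seconds for 8 days gives roughly the 98,000
# frames quoted for the ATVIS prototype.
frames = photo_capacity(7, 8)
```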

  20. Strategy for the development of a smart NDVI camera system for outdoor plant detection and agricultural embedded systems.

    PubMed

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-24

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.
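    The per-pixel NDVI discrimination described above can be sketched in a few lines of Python. The segmentation threshold is an illustrative choice of ours, since usable thresholds depend on the scene and sensor:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index for one pixel pair.
    Healthy vegetation reflects strongly in the NIR band and absorbs
    red light, pushing NDVI toward +1, while bare soil stays low."""
    return (nir - red) / (nir + red + eps)

def plant_mask(nir_img, red_img, threshold=0.3):
    """Per-pixel plant/soil segmentation from co-registered NIR and
    red channel images (nested lists of reflectances in [0, 1])."""
    return [[ndvi(n, r) > threshold for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir_img, red_img)]
```

The requirement that the two channels be co-registered pixel by pixel is exactly why two-chip designs need precise optical alignment, and why a one-chip design with modified pixels is attractive.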

  1. Strategy for the Development of a Smart NDVI Camera System for Outdoor Plant Detection and Agricultural Embedded Systems

    PubMed Central

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-01

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed. PMID:23348037

  2. Proportional counter radiation camera

    DOEpatents

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon-emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse-shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced by ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  3. Using Meta Analysis Techniques to Assess the Safety Effect of Red Light Running Cameras

    DOT National Transportation Integrated Search

    2002-02-01

    Automated enforcement programs, including automated systems that are used to enforce red light running violations, have recently come under scrutiny regarding their value in terms of improving safety, their primary purpose. One of the major hurdles t...

  4. 15-micro-m 128 x 128 GaAs/Al(x)Ga(1-x) As Quantum Well Infrared Photodetector Focal Plane Array Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, Sarath D.; Park, Jin S.; Sarusi, Gabby; Lin, True-Lon; Liu, John K.; Maker, Paul D.; Muller, Richard E.; Shott, Craig A.; Hoelter, Ted

    1997-01-01

    In this paper, we discuss the development of very sensitive, very long wavelength infrared GaAs/Al(x)Ga(1-x)As quantum well infrared photodetectors (QWIP's) based on bound-to-quasi-bound intersubband transition, fabrication of random reflectors for efficient light coupling, and the demonstration of a 15 micro-m cutoff 128 x 128 focal plane array imaging camera. Excellent imagery, with a noise equivalent differential temperature (N E(delta T)) of 30 mK has been achieved.

  5. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
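
    The calibration step described above (bias, dark, and flat-field correction) follows the standard CCD reduction recipe; a minimal sketch with synthetic frames (the MINERVA pipeline's actual interfaces are not specified in the abstract):

```python
import numpy as np

def calibrate(raw, bias, dark, flat):
    """Basic CCD reduction: remove the bias offset and dark current,
    then divide by a unit-median flat field (e.g. built from sky flats)
    to correct pixel-to-pixel response variations."""
    flat_norm = flat / np.median(flat)
    return (raw - bias - dark) / flat_norm

# Illustrative synthetic frames (not MINERVA data): with a uniform
# flat field, calibration reduces to offset subtraction.
raw = np.full((4, 4), 1500.0)
bias = np.full((4, 4), 300.0)
dark = np.full((4, 4), 50.0)
flat = np.ones((4, 4))
science = calibrate(raw, bias, dark, flat)
```

    Master bias, dark, and flat frames are in practice medians of many exposures; the photometry for the transit light curve is then performed on the calibrated science frames.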

  6. A Stellar Ripple

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This false-color composite image shows the Cartwheel galaxy as seen by the Galaxy Evolution Explorer's far ultraviolet detector (blue); the Hubble Space Telescope's wide field and planetary camera 2 in B-band visible light (green); the Spitzer Space Telescope's infrared array camera at 8 microns (red); and the Chandra X-ray Observatory's advanced CCD imaging spectrometer-S array instrument (purple).

    Approximately 100 million years ago, a smaller galaxy plunged through the heart of Cartwheel galaxy, creating ripples of brief star formation. In this image, the first ripple appears as an ultraviolet-bright blue outer ring. The blue outer ring is so powerful in the Galaxy Evolution Explorer observations that it indicates the Cartwheel is one of the most powerful UV-emitting galaxies in the nearby universe. The blue color reveals to astronomers that associations of stars 5 to 20 times as massive as our sun are forming in this region. The clumps of pink along the outer blue ring are regions where both X-rays and ultraviolet radiation are superimposed in the image. These X-ray point sources are very likely collections of binary star systems containing a blackhole (called massive X-ray binary systems). The X-ray sources seem to cluster around optical/ultraviolet-bright supermassive star clusters.

    The yellow-orange inner ring and nucleus at the center of the galaxy result from the combination of visible and infrared light, which is stronger towards the center. This region of the galaxy represents the second ripple, or ring wave, created in the collision, but has much less star formation activity than the first (outer) ring wave. The wisps of red spread throughout the interior of the galaxy are organic molecules that have been illuminated by nearby low-level star formation. Meanwhile, the tints of green are less massive, older visible-light stars.

    Although astronomers have not identified exactly which galaxy collided with the Cartwheel, two of three candidate galaxies can be seen in this image to the bottom left of the ring, one as a neon blob and the other as a green spiral.

    Previously, scientists believed the ring marked the outermost edge of the galaxy, but the latest GALEX observations detect a faint disk, not visible in this image, that extends to twice the diameter of the ring.

  7. Growth Chambers on the International Space Station for Large Plants

    NASA Technical Reports Server (NTRS)

    Massa, Gioia D.; Wheeler, Raymond M.; Morrow, Robert C.; Levine, Howard G.

    2016-01-01

    The International Space Station (ISS) now has platforms for conducting research on horticultural plant species under LED (light-emitting diode) lighting, and those capabilities continue to expand. The Veggie vegetable production system was deployed to the ISS as an applied research platform for food production in space. Veggie is capable of growing a wide array of horticultural crops. It was designed for low power usage, low launch mass and stowage volume, and minimal crew time requirements. The Veggie flight hardware consists of a light cap containing red (630 nm), blue (455 nm), and green (530 nm) LEDs. Interfacing with the light cap is an extendable bellows/baseplate for enclosing the plant canopy. A second large plant growth chamber, the Advanced Plant Habitat (APH), will fly to the ISS in 2017. APH will be a fully controllable environment for high-quality plant physiological research. APH will control light (quality, level, and timing), temperature, CO2, relative humidity, and irrigation, while scrubbing any cabin- or plant-derived ethylene and other volatile organic compounds. Additional capabilities include sensing of leaf temperature and of root-zone moisture, temperature, and oxygen concentration. The light cap will have red (630 nm), blue (450 nm), green (525 nm), far-red (730 nm), and broad-spectrum white (4100 K) LEDs. There will be several internal cameras (visible and IR) to monitor and record plant growth and operations. Veggie and APH are available for research proposals.

  8. Ring of Stellar Death

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This false-color image from NASA's Spitzer Space Telescope shows a dying star (center) surrounded by a cloud of glowing gas and dust. Thanks to Spitzer's dust-piercing infrared eyes, the new image also highlights a never-before-seen feature -- a giant ring of material (red) slightly offset from the cloud's core. This clumpy ring consists of material that was expelled from the aging star.

    The star and its cloud halo constitute a 'planetary nebula' called NGC 246. When a star like our own Sun begins to run out of fuel, its core shrinks and heats up, boiling off the star's outer layers. Leftover material shoots outward, expanding in shells around the star. This ejected material is then bombarded with ultraviolet light from the central star's fiery surface, producing huge, glowing clouds -- planetary nebulas -- that look like giant jellyfish in space.

    In this image, the expelled gases appear green, and the ring of expelled material appears red. Astronomers believe the ring is likely made of hydrogen molecules that were ejected from the star in the form of atoms, then cooled to make hydrogen pairs. The new data will help explain how planetary nebulas take shape, and how they nourish future generations of stars.

    This image composite was taken on Dec. 6, 2003, by Spitzer's infrared array camera, and is composed of images obtained at four wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red).

  9. Spitzer Makes 'Invisible' Visible

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (ten thousand trillion heptillion).

    New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud.

    The colorful image is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon.

  10. Dissection of a Galaxy

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sometimes, the best way to understand how something works is to take it apart. The same is true for galaxies like NGC 300, which NASA's Spitzer Space Telescope has divided into its various parts. NGC 300 is a face-on spiral galaxy located 7.5 million light-years away in the southern constellation Sculptor.

    This false-color image taken by the infrared array camera on Spitzer readily distinguishes the main star component of the galaxy (blue) from its dusty spiral arms (red). The star distribution peaks strongly in the central bulge where older stars congregate, and tapers off along the arms where younger stars reside.

    Thanks to Spitzer's unique ability to sense the heat or infrared emission from dust, astronomers can now clearly trace the embedded dust structures within NGC 300's arms. When viewed at visible wavelengths, the galaxy's dust appears as dark lanes, largely overwhelmed by bright starlight. With Spitzer, the dust - in particular organic compounds called polycyclic aromatic hydrocarbons - can be seen in vivid detail (red). These organic molecules are produced, along with heavy elements, by the stellar nurseries that pepper the arms.

    The findings provide a better understanding of spiral galaxy mechanics and, in the future, will help decipher more distant galaxies, whose individual components cannot be resolved.

    This image was taken on Nov. 21, 2003 and is composed of photographs obtained at four wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red).

  11. Dissection of a Galaxy

    NASA Image and Video Library

    2004-05-11

    Sometimes, the best way to understand how something works is to take it apart. The same is true for galaxies like NGC 300, which NASA's Spitzer Space Telescope has divided into its various parts. NGC 300 is a face-on spiral galaxy located 7.5 million light-years away in the southern constellation Sculptor. This false-color image taken by the infrared array camera on Spitzer readily distinguishes the main star component of the galaxy (blue) from its dusty spiral arms (red). The star distribution peaks strongly in the central bulge where older stars congregate, and tapers off along the arms where younger stars reside. Thanks to Spitzer's unique ability to sense the heat or infrared emission from dust, astronomers can now clearly trace the embedded dust structures within NGC 300's arms. When viewed at visible wavelengths, the galaxy's dust appears as dark lanes, largely overwhelmed by bright starlight. With Spitzer, the dust - in particular organic compounds called polycyclic aromatic hydrocarbons - can be seen in vivid detail (red). These organic molecules are produced, along with heavy elements, by the stellar nurseries that pepper the arms. The findings provide a better understanding of spiral galaxy mechanics and, in the future, will help decipher more distant galaxies, whose individual components cannot be resolved. This image was taken on Nov. 21, 2003 and is composed of photographs obtained at four wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red). http://photojournal.jpl.nasa.gov/catalog/PIA05879

  12. Retinal fundus imaging with a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image in which everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
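
    Digital refocusing from a single light-field snapshot can be sketched as a shift-and-add over sub-aperture views; this toy version (the camera's actual on-the-fly processing is not described in the abstract) shifts each view in proportion to its micro-lens offset before averaging:

```python
import numpy as np

def refocus(subaps, alpha):
    """Synthetic refocusing by shift-and-add: each sub-aperture view,
    keyed by its (u, v) lens offset, is shifted by alpha * (u, v)
    (integer pixels here for simplicity) and the results are averaged.
    alpha selects the depth plane brought into focus; alpha = 0 keeps
    the native focal plane."""
    acc = None
    for (u, v), img in subaps.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted.astype(float) if acc is None else acc + shifted
    return acc / len(subaps)

# Two toy views from adjacent micro-lenses; with alpha = 0 the result
# is simply the average of the views.
views = {(0, 0): np.eye(4), (0, 1): np.eye(4)}
stack0 = refocus(views, 0.0)
```

    Sweeping alpha over a range of values yields the focal stack mentioned in the abstract; taking a per-pixel sharpness maximum across the stack gives an extended depth-of-field image.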

  13. Diagnostics for Z-pinch implosion experiments on PTS

    NASA Astrophysics Data System (ADS)

    Ren, X. D.; Huang, X. B.; Zhou, S. T.; Zhang, S. Q.; Dan, J. K.; Li, J.; Cai, H. C.; Wang, K. L.; Ouyang, K.; Xu, Q.; Duan, S. C.; Chen, G. H.; Wang, M.; Feng, S. P.; Yang, L. B.; Xie, W. P.; Deng, J. J.

    2014-12-01

    Preliminary wire-array implosion experiments were performed on PTS, a 10 MA z-pinch driver with a 70 ns rise time. A set of diagnostics has been developed and fielded on PTS to study the pinch physics and implosion dynamics of wire arrays. Radiated power measurement for soft x-rays was performed by a multichannel filtered x-ray diode array and a flat-spectral-response x-ray diode detector. The total x-ray yield was measured by a calibrated, unfiltered nickel bolometer, which was also used to obtain the pinch power. Multiple time-gated pinhole cameras were used to produce spatially resolved images of x-ray self-emission from the plasmas. Two time-integrated pinhole cameras were used, one with a 20-μm Be filter and one with multilayer mirrors, to record images produced by >1-keV and 277±5 eV self-emission, respectively. An optical streak camera was used to record radial implosion trajectories, and an x-ray streak camera paired with a horizontal slit was used to record a continuous time history of emission with one-dimensional spatial resolution. A frequency-doubled Nd:YAG laser (532 nm) was used to produce four-frame laser shadowgraph images at 6 ns intervals. We briefly describe each of these diagnostics and present some typical results from them.

  14. Fabrication of large dual-polarized multichroic TES bolometer arrays for CMB measurements with the SPT-3G camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Posada, C. M.; Ade, P. A. R.; Ahmed, Z.

    2015-08-11

    This work presents the procedures used by Argonne National Laboratory to fabricate large arrays of multichroic transition-edge sensor (TES) bolometers for cosmic microwave background (CMB) measurements. These detectors will be assembled into the focal plane for the SPT-3G camera, the third generation CMB camera to be installed in the South Pole Telescope. The complete SPT-3G camera will have approximately 2690 pixels, for a total of 16,140 TES bolometric detectors. Each pixel is comprised of a broad-band sinuous antenna coupled to a Nb microstrip line. In-line filters are used to define the different band-passes before the millimeter-wavelength signal is fed to the respective Ti/Au TES bolometers. There are six TES bolometer detectors per pixel, which allow for measurements of three band-passes (95 GHz, 150 GHz and 220 GHz) and two polarizations. The steps involved in the monolithic fabrication of these detector arrays are presented here in detail. Patterns are defined using a combination of stepper and contact lithography. The misalignment between layers is kept below 200 nm. The overall fabrication involves a total of 16 processes, including reactive and magnetron sputtering, reactive ion etching, inductively coupled plasma etching and chemical etching.

  15. Terahertz Real-Time Imaging Uncooled Arrays Based on Antenna-Coupled Bolometers or FET Developed at CEA-Leti

    NASA Astrophysics Data System (ADS)

    Simoens, François; Meilhan, Jérôme; Nicolas, Jean-Alain

    2015-10-01

    Sensitive and large-format terahertz focal plane arrays (FPAs) integrated in compact, hand-held cameras that deliver real-time terahertz (THz) imaging are required for many application fields, such as non-destructive testing (NDT), security, and quality control in the food and agricultural products industry. Two technologies of uncooled THz arrays studied at CEA-Leti, i.e., bolometers and complementary metal oxide semiconductor (CMOS) field effect transistors (FETs), are able to meet these requirements. This paper reviews the technological approaches followed and focuses on the latest modeling and performance analysis. The applicability of these arrays to NDT and security is then demonstrated with experimental tests. In particular, the high technological maturity of the THz bolometer camera is illustrated by fast scanning of a large field of view of opaque scenes, achieved in a complete body-scanner prototype.

  16. Imaging spectrometer/camera having convex grating

    NASA Technical Reports Server (NTRS)

    Reininger, Francis M. (Inventor)

    2000-01-01

    An imaging spectrometer has fore-optics coupled to a spectral resolving system with an entrance slit extending in a first direction at an imaging location of the fore-optics for receiving the image; a convex diffraction grating for separating the image into a plurality of spectra of predetermined wavelength ranges; a spectrometer array for detecting the spectra; and at least one concave spherical mirror concentric with the diffraction grating for relaying the image from the entrance slit to the diffraction grating and from the diffraction grating to the spectrometer array. In one embodiment, the spectrometer is configured in a lateral mode in which the entrance slit and the spectrometer array are displaced laterally on opposite sides of the diffraction grating in a second direction substantially perpendicular to the first direction. In another embodiment, the spectrometer is combined with a polychromatic imaging camera array disposed adjacent said entrance slit for recording said image.

  17. In-Space Structural Validation Plan for a Stretched-Lens Solar Array Flight Experiment

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woods-Vedeler, Jessica A.; Jones, Thomas W.

    2001-01-01

    This paper summarizes in-space structural validation plans for a proposed Space Shuttle-based flight experiment. The test article is an innovative, lightweight solar array concept that uses pop-up, refractive stretched-lens concentrators to achieve a power/mass density of at least 175 W/kg, which is more than three times greater than current capabilities. The flight experiment will validate this new technology to retire the risk associated with its first use in space. The experiment includes structural diagnostic instrumentation to measure the deployment dynamics, static shape, and modes of vibration of the 8-meter-long solar array and several of its lenses. These data will be obtained by photogrammetry using the Shuttle payload-bay video cameras and miniature video cameras on the array. Six accelerometers are also included in the experiment to measure base excitations and small-amplitude tip motions.

  18. Study of plant phototropic responses to different LEDs illumination in microgravity

    NASA Astrophysics Data System (ADS)

    Zyablova, Natalya; Berkovich, Yuliy A.; Skripnikov, Alexander; Nikitin, Vladimir

    2012-07-01

    The purpose of the experiment planned for the Russian BION-M #1 (2012) biosatellite is to study the phototropic responses of Physcomitrella patens (Hedw.) B.S.G. to different light stimuli in microgravity. The moss was chosen as a small-size higher plant. The experimental design involves five lightproof culture flasks with moss gametophores fixed inside a cylindrical container (diameter 120 mm; height 240 mm). The plants in each flask are illuminated laterally by one of the following LEDs: white, blue (475 nm), red (625 nm), far red (730 nm), or infrared (950 nm). Gametophore growth and bending are captured periodically by means of five analogue video cameras and a recorder. A programmable command module controls the power supply of each camera and each light source, the commutation of the cameras, and the functioning of the video recorder. Every 20 minutes the recorder is sequentially connected to one of the cameras. This results in a clip containing 5 sets of frames in a row. After landing, time-lapse films are automatically created; as a result we will have five time-lapse films covering the transformations in each of the five culture flasks. On-ground experiments demonstrated that white light induced stronger gametophore phototropic bending than red and blue stimuli. The comparison of time-lapse recordings in the experiments will provide useful information for optimizing lighting assemblies for space plant growth facilities.
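
    The camera commutation described above (the recorder connects to one of the five cameras every 20 minutes) amounts to a round-robin schedule; a small sketch, with camera labels that are illustrative rather than flight identifiers:

```python
from itertools import cycle

# Five camera channels, one per culture flask; labels are illustrative.
CAMERAS = ["white", "blue_475nm", "red_625nm", "far_red_730nm", "ir_950nm"]

def build_schedule(slots, slot_minutes=20):
    """Round-robin recorder schedule: one camera per 20-minute slot,
    returned as (start_minute, camera) pairs."""
    chan = cycle(CAMERAS)
    return [(i * slot_minutes, next(chan)) for i in range(slots)]

schedule = build_schedule(10)
# Each camera is revisited every 5 slots, i.e. every 100 minutes.
```
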

  19. Field trials for determining the visible and infrared transmittance of screening smoke

    NASA Astrophysics Data System (ADS)

    Sánchez Oliveros, Carmen; Santa-María Sánchez, Guillermo; Rosique Pérez, Carlos

    2009-09-01

    In order to evaluate the concealment capability of smoke, the Countermeasures Laboratory of the Institute of Technology "Marañosa" (ITM) has carried out a set of tests measuring the transmittance of multispectral smoke tins in several bands of the electromagnetic spectrum. The smoke composition, based on red phosphorus, was developed and patented by this laboratory as part of a projectile development. The smoke transmittance was measured by means of thermography as well as spectroradiometry. Black bodies and halogen lamps were used as infrared and visible sources of radiation. The measurements were carried out in June 2008 at the Marañosa field (Spain) with two MWIR cameras, two LWIR cameras, one CCD visible camera, one CVF IR spectroradiometer covering the interval 1.5 to 14 μm, and one silicon-array-based spectroradiometer for the 0.2 to 1.1 μm range. The transmittance and dimensions of the smoke screen were characterized in the visible band and in the MWIR (3-5 μm) and LWIR (8-12 μm) regions. The screen was about 30 meters wide and 5 meters high. The transmittances in the IR bands were about 0.3, and better than 0.1 in the visible one. The screens proved effective over their time of persistence in all of the tests. The results obtained from the imaging and non-imaging systems were in good accordance. Meteorological conditions during the tests, such as wind speed, are decisive for the use of this kind of optical countermeasure.
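
    The band transmittance reported above is, in essence, a ratio of background-corrected source signals measured through the screen and with a clear line of sight; a minimal sketch (the signal values are illustrative, and the Beer-Lambert extinction step is an added assumption, not part of the reported field procedure):

```python
import math

def band_transmittance(through, clear, background=0.0):
    """Screen transmittance in one spectral band: the ratio of the
    background-corrected source signal seen through the smoke to the
    background-corrected signal with no smoke present."""
    return (through - background) / (clear - background)

# Illustrative detector readings (arbitrary units), not measured data.
tau_vis = band_transmittance(0.12, 1.10, background=0.10)

# Assumed Beer-Lambert step: lump extinction into one product
# alpha * C * L = -ln(tau), useful for comparing smoke concentrations.
acl = -math.log(tau_vis)
```

    A tau_vis of 0.02 would be consistent with the "better than 0.1 in the visible" figure quoted above; the IR bands, at about 0.3, attenuate far less.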

  20. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    When designing a system employing thermal cameras, one always faces the problem of choosing the camera type best suited for the task. In many cases the choice is far from optimal, and there are several reasons for that. System designers often favor the tried and tested solutions they are used to; they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the real settings used in normal camera operation were applied, to obtain realistic performance figures. For example, there were significant differences between measured noise parameters and the catalogue data provided by manufacturers, due to the application of edge-detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  1. KSC-01pp1730

    NASA Image and Video Library

    2001-11-27

    KENNEDY SPACE CENTER, Fla. -- In the Vertical Processing Facility, members of the STS-109 crew look over the Solar Array 3 panels that will be replacing Solar Array 2 panels on the Hubble Space Telescope (HST). Trainers, at left, point to the panels while Mission Specialist Nancy Currie (second from right) and Commander Scott Altman (far right) look on. Other crew members are Pilot Duane Carey, Payload Commander John Grunsfeld and Mission Specialists James Newman, Richard Linnehan and Michael Massimino. The other goals of the mission are replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  2. An assessment of the utility of a non-metric digital camera for measuring standing trees

    Treesearch

    Neil Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matthew F. Winn

    2000-01-01

    Images acquired with a commercially available digital camera were used to make measurements on 20 red oak (Quercus spp.) stems. The ranges of diameter at breast height (DBH) and height to a 10 cm upper-stem diameter were 16-66 cm and 12-20 m, respectively. Camera stations located 3, 6, 9, 12, and 15 m from the stem were studied to determine the best distance to be...
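
    Recovering a stem diameter from a single image taken at a known camera distance can be sketched with the pinhole model; the function below is an illustrative assumption, not the study's actual photogrammetric procedure, and all sensor and lens parameters are hypothetical:

```python
def stem_diameter_mm(width_px, pixel_pitch_mm, focal_mm, range_m):
    """Pinhole-model width estimate: the object's width equals its
    image width scaled by range over focal length. Parameters here
    (pixel pitch, focal length) are hypothetical examples."""
    image_width_mm = width_px * pixel_pitch_mm
    return image_width_mm * (range_m * 1000.0) / focal_mm

# A stem spanning 100 pixels on a 10-micron-pitch sensor behind a
# 50 mm lens, photographed from 6 m, works out to 120 mm (12 cm).
d = stem_diameter_mm(100, 0.01, 50.0, 6.0)
```

    Varying the camera distance, as the study does with stations at 3 to 15 m, trades angular resolution on the stem against field of view up the bole.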

  3. Development of a 1K x 1K GaAs QWIP Far IR Imaging Array

    NASA Technical Reports Server (NTRS)

    Jhabvala, M.; Choi, K.; Goldberg, A.; La, A.; Gunapala, S.

    2003-01-01

    In the on-going evolution of GaAs Quantum Well Infrared Photodetectors (QWIPs) we have developed a 1,024 x 1,024 (1K x1K), 8.4-9 microns infrared focal plane array (FPA). This 1 megapixel detector array is a hybrid using the Rockwell TCM 8050 silicon readout integrated circuit (ROIC) bump bonded to a GaAs QWIP array fabricated jointly by engineers at the Goddard Space Flight Center (GSFC) and the Army Research Laboratory (ARL). The finished hybrid is thinned at the Jet Propulsion Lab. Prior to this development the largest format array was a 512 x 640 FPA. We have integrated the 1K x 1K array into an imaging camera system and performed tests over the 40K-90K temperature range achieving BLIP performance at an operating temperature of 76K (f/2 camera system). The GaAs array is relatively easy to fabricate once the superlattice structure of the quantum wells has been defined and grown. The overall arrays costs are currently dominated by the costs associated with the silicon readout since the GaAs array fabrication is based on high yield, well-established GaAs processing capabilities. In this paper we will present the first results of our 1K x 1K QWIP array development including fabrication methodology, test data and our imaging results.

  4. Life at the Intersection of Colliding Galaxies

    NASA Image and Video Library

    2004-09-07

    This false-color image from NASA's Spitzer Space Telescope reveals hidden populations of newborn stars at the heart of the colliding "Antennae" galaxies. These two galaxies, known individually as NGC 4038 and 4039, are located around 68 million light-years away and have been merging together for about the last 800 million years. The latest Spitzer observations provide a snapshot of the tremendous burst of star formation triggered in the process of this collision, particularly at the site where the two galaxies overlap. The image was taken by Spitzer's infrared array camera and is a combination of infrared light ranging from 3.6 microns (shown in blue) to 8.0 microns (shown in red). The dust emission (red) is by far the strongest feature in this image. Starlight was systematically subtracted from the longer wavelength data (red) to enhance dust features. The two nuclei, or centers, of the merging galaxies show up as white areas, one above the other. The brightest clouds of forming stars lie in the overlap region between and left of the nuclei. Throughout the sky, astronomers have identified many of these so-called "interacting" galaxies, whose spiral discs have been stretched and distorted by their mutual gravity as they pass close to one another. The distances involved are so large that the interactions evolve on timescales comparable to geologic changes on Earth. Observations of such galaxies, combined with computer models of these collisions, show that the galaxies often become forever bound to one another, eventually merging into a single, spheroidal-shaped galaxy. Wavelengths of 3.6 microns are represented in blue, 4.5 microns in green and 5.8-8.0 microns in red. This image was taken on Dec. 24, 2003. http://photojournal.jpl.nasa.gov/catalog/PIA06853

  5. A high-speed trapezoid image sensor design for continuous traffic monitoring at signalized intersection approaches.

    DOT National Transportation Integrated Search

    2014-10-01

    The goal of this project is to monitor traffic flow continuously with an innovative camera system composed of a custom : designed image sensor integrated circuit (IC) containing trapezoid pixel array and camera system that is capable of : intelligent...

  6. Evaluating the effectiveness of red light running camera enforcement in Cedar Rapids and developing guidelines for selection and use of red light running countermeasures.

    DOT National Transportation Integrated Search

    2011-11-01

    Red light running (RLR) is a problem in the US that has resulted in 165,000 injuries and 907 fatalities annually. In Iowa, RLR-related crashes make up 24.5 percent of all crashes and account for 31.7 percent of fatal and major injury crashes at signa...

  7. Light field geometry of a Standard Plenoptic Camera.

    PubMed

    Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome

    2014-11-03

    The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of the paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in Matlab. The optics simulation tool Zemax is used for validation. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the Zemax simulation, whereas baseline estimations indicate no significant difference. The proposed methodology enables an alternative to traditional depth map acquisition by disparity analysis.

  8. Advanced imaging research and development at DARPA

    NASA Astrophysics Data System (ADS)

    Dhar, Nibir K.; Dat, Ravi

    2012-06-01

    Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology; CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slower due to market volume and many technological barriers in detector materials, optics and the fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various DARPA technology development projects that advance imaging technology for both visible and infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small pixel pitch, and broadband and multiband detectors and focal plane arrays.

  9. KSC01pd1736

    NASA Image and Video Library

    2001-11-26

    KENNEDY SPACE CENTER, Fla. -- A piece of equipment for Hubble Space Telescope Servicing mission is moved inside Hangar AE, Cape Canaveral. In the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  10. KSC-01pp1758

    NASA Image and Video Library

    2001-11-29

    KENNEDY SPACE CENTER, Fla. -- In Hangar A&E, workers watch as an overhead crane lifts the Advanced Camera for Surveys out of its transportation container. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  11. KSC01pd1735

    NASA Image and Video Library

    2001-11-26

    KENNEDY SPACE CENTER, Fla. - A piece of equipment for Hubble Space Telescope Servicing mission arrives at Hangar AE, Cape Canaveral. Inside the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  12. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances image brightness, converts the video signals from YCbCr to RGB, extracts the R component from one camera while the G and B components are extracted synchronously from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding red and blue components, the system reduces the loss of chrominance components and keeps the picture's color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data processed during fusion, shortening fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience for audiences wearing red-blue glasses.
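
    The channel-extraction step described above (R from one camera, G and B from the other) amounts to a classic anaglyph fusion. A minimal sketch in Python/NumPy — the array names and the brightness gain are illustrative assumptions, not the DM642 implementation:

```python
import numpy as np

def fuse_red_blue(left_rgb, right_rgb, gain=1.0):
    """Fuse two RGB frames into one red-blue (anaglyph) 3D frame.

    left_rgb, right_rgb: uint8 arrays of shape (H, W, 3).
    gain: illustrative brightness-enhancement factor applied before fusion.
    """
    left = left_rgb.astype(np.float32) * gain
    right = right_rgb.astype(np.float32) * gain
    fused = np.empty_like(left)
    fused[..., 0] = left[..., 0]      # R component from the left camera
    fused[..., 1:] = right[..., 1:]   # G and B components from the right camera
    return np.clip(fused, 0, 255).astype(np.uint8)
```

    Viewed through red-blue glasses, each eye then receives the image from only one of the two cameras, which is what produces the depth impression.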

  13. The range of the mange: Spatiotemporal patterns of sarcoptic mange in red foxes (Vulpes vulpes) as revealed by camera trapping

    PubMed Central

    Odden, Morten; Linnell, John D. C.; Odden, John

    2017-01-01

    Sarcoptic mange is a widely distributed disease that affects numerous mammalian species. We used camera traps to investigate the apparent prevalence and spatiotemporal dynamics of sarcoptic mange in a red fox population in southeastern Norway. We monitored red foxes for five years using 305 camera traps distributed across an 18000 km2 area. A total of 6581 fox events were examined to visually identify mange compatible lesions. We investigated factors associated with the occurrence of mange by using logistic models within a Bayesian framework, whereas the spatiotemporal dynamics of the disease were analysed with space-time scan statistics. The apparent prevalence of the disease fluctuated over the study period with a mean of 3.15% and credible interval [1.25, 6.37], and our best logistic model explaining the presence of red foxes with mange-compatible lesions included time since the beginning of the study and the interaction between distance to settlement and season as explanatory variables. The scan analyses detected several potential clusters of the disease that varied in persistence and size, and the locations in the cluster with the highest probability were closer to human settlements than the other survey locations. Our results indicate that red foxes in an advanced stage of the disease are most likely found closer to human settlements during periods of low wild prey availability (winter). We discuss different potential causes. Furthermore, the disease appears to follow a pattern of small localized outbreaks rather than sporadic isolated events. PMID:28423011

  14. The electronics system for the LBNL positron emission mammography (PEM) camera

    NASA Astrophysics Data System (ADS)

    Moses, W. W.; Young, J. W.; Baker, K.; Jones, W.; Lenox, M.; Ho, M. H.; Weng, M.

    2001-06-01

    This paper describes the electronics for a high-performance positron emission mammography (PEM) camera. It is based on the electronics for a human brain positron emission tomography (PET) camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An application-specific integrated circuit (ASIC) services the PD array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs apply crystal-by-crystal correction factors and measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication and real-time control, and configure the system. Extensive use of FPGAs makes the overall design extremely flexible, allowing many different functions (or design modifications) to be realized without hardware changes. The very high level of integration and density achieved by this system requires the extensive onboard diagnostics implemented in the FPGAs.
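
    The depth-of-interaction estimate from the PD/PMT ratio can be illustrated with a simple linear calibration. The end-point ratios and crystal length below are hypothetical placeholders, not values from the paper:

```python
def depth_from_ratio(pd_signal, pmt_signal,
                     ratio_front=0.2, ratio_back=0.8, crystal_len_mm=20.0):
    """Estimate interaction depth from the PD/PMT signal ratio.

    Assumes (for illustration only) that the ratio varies linearly with
    depth between a calibrated value at the PMT face (ratio_front) and
    one at the photodiode face (ratio_back).
    """
    ratio = pd_signal / pmt_signal
    frac = (ratio - ratio_front) / (ratio_back - ratio_front)
    frac = min(max(frac, 0.0), 1.0)   # clamp to the physical crystal
    return frac * crystal_len_mm
```

    In the real system this mapping would be applied per crystal via the lookup RAMs, since each crystal has its own correction factors.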

  15. Botswana: Ntwetwe and Sua Pans

    Atmospheric Science Data Center

    2013-04-15

    ... of red band imagery in which the 45-degree aft camera data are displayed in blue, 45-degree forward as green, and vertical as red. ... coat the surface and turn it bright ("sua" means salt). The mining town of Sowa is located where the Sua Spit (a finger of grassland ...

  16. CMOS Camera Array With Onboard Memory

    NASA Technical Reports Server (NTRS)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 megapixels), a USB (universal serial bus) 2.0 interface, and onboard memory. Exposure times and other operating parameters are sent from a control PC via the USB port. Data from the camera can be received via the USB port, and the interface allows for simple control and data capture through a laptop computer.

  17. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, geometric distortion, sequential scanning and AGC.

  18. Young Stars Emerge from Orion Head

    NASA Image and Video Library

    2007-05-17

    This image from NASA's Spitzer Space Telescope shows infant stars "hatching" in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's "head," just north of the massive star Lambda Orionis. Wisps of green in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. Tints of orange-red in the cloud are dust particles warmed by the newly forming stars. The reddish-pink dots at the top of the cloud are very young stars embedded in a cocoon of cosmic gas and dust. Blue spots throughout the image are background Milky Way along this line of sight. This composite includes data from Spitzer's infrared array camera instrument, and multiband imaging photometer instrument. Light at 4.5 microns is shown as blue, 8.0 microns is green, and 24 microns is red. http://photojournal.jpl.nasa.gov/catalog/PIA09411

  19. Young Stars Emerge from Orion's Head

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This image from NASA's Spitzer Space Telescope shows infant stars 'hatching' in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth

    The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's 'head,' just north of the massive star Lambda Orionis.

    Wisps of green in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked.

    Tints of orange-red in the cloud are dust particles warmed by the newly forming stars. The reddish-pink dots at the top of the cloud are very young stars embedded in a cocoon of cosmic gas and dust. Blue spots throughout the image are background Milky Way along this line of sight.

    This composite includes data from Spitzer's infrared array camera instrument, and multiband imaging photometer instrument. Light at 4.5 microns is shown as blue, 8.0 microns is green, and 24 microns is red.

  20. Why Are Galaxies So Smooth?

    NASA Image and Video Library

    2009-04-30

    This image from NASA's Spitzer Space Telescope shows the spiral galaxy NGC 2841, located about 46 million light-years from Earth in the constellation Ursa Major. The galaxy is helping astronomers solve one of the oldest puzzles in astronomy: Why do galaxies look so smooth, with stars sprinkled evenly throughout? An international team of astronomers has discovered that rivers of young stars flow from their hot, dense stellar nurseries, dispersing out to form large, smooth distributions. This image is a composite of three different wavelengths from Spitzer's infrared array camera. The shortest wavelengths are displayed in blue, and mostly show the older stars in NGC 2841, as well as foreground stars in our own Milky Way galaxy. The cooler areas are highlighted in red, and show the dusty, gaseous regions of the galaxy. Blue shows infrared light of 3.6 microns, green represents 4.5-micron light and red, 8.0-micron light. The contribution from starlight measured at 3.6 microns has been subtracted from the 8.0-micron data to enhance the visibility of the dust features. http://photojournal.jpl.nasa.gov/catalog/PIA12001

  1. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. High-speed applications therefore often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, matched to the objects' movement, results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited with CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the ability to address the rows of the sensor's pixel array arbitrarily. For digital TDI, only a small number of rows is read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA, discusses aspects relevant to practical application, and lists key features of the camera.
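
    The shift-and-accumulate step at the heart of digital TDI can be sketched in a few lines. The sketch assumes, for simplicity, that the object moves by exactly one sensor row per frame; the FPGA implementation operates on streamed rows rather than whole frames:

```python
import numpy as np

def digital_tdi(frames, n_stages):
    """Emulate digital TDI: accumulate n_stages successive frames,
    each shifted so that an object moving one row per frame stays aligned."""
    acc = np.zeros_like(frames[0], dtype=np.int64)
    for k in range(n_stages):
        # shift frame k up by k rows to track the object's motion
        acc += np.roll(frames[k], -k, axis=0)
    return acc
```

    The accumulated signal grows linearly with the number of stages, which is exactly the longer effective exposure time the abstract describes, while a correctly tracked object incurs no extra motion blur.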

  2. Pyroelectric IR sensor arrays for fall detection in the older population

    NASA Astrophysics Data System (ADS)

    Sixsmith, A.; Johnson, N.; Whatmore, R.

    2005-09-01

    Uncooled pyroelectric sensor arrays have been studied over many years for their uses in thermal imaging applications. These arrays detect only changes in IR flux, so systems based upon them are very good at detecting the movements of people in a scene without sensing the background when used in staring mode. Relatively low element-count arrays (16 x 16) can be used for a variety of people-sensing applications, including people counting (for safety applications), queue monitoring, etc. With appropriate signal processing, such systems can also be used for the detection of particular events such as a person falling over. There is a considerable need for automatic fall detection among older people, but the current and emerging technologies available for this have important limitations. Simple sensors, such as 1- or 2-element pyroelectric infra-red sensors, provide crude data that is difficult to interpret; devices worn on the person, such as wrist communicators and motion detectors, have potential but rely on the person being able and willing to wear the device; video cameras may be seen as intrusive and require considerable human resources to monitor activity, while machine interpretation of camera images is complex and may be difficult in this application area. The use of a pyroelectric thermal array sensor was seen to have a number of potential benefits. The sensor is wall-mounted and does not require the user to wear a device. It enables detailed analysis of a subject's motion to be performed locally, within the detector, using only a modest processor. This is possible because data from the sensor is relatively easy to interpret compared with the data generated by alternative sensors such as video devices.

    In addition to the cost-effectiveness of this solution, it was felt that the lack of detail in the low-level data, together with the elimination of the need to transmit data outside the detector, would help to avert feelings of intrusiveness on the part of the end-user. The main benefits of this type of technology would be for older people who spend time alone in unsupervised environments. This would include people living alone in ordinary housing or in sheltered accommodation (apartment complexes for older people with a local warden), and non-communal areas in residential/nursing home environments (e.g. bedrooms and ensuite bathrooms and toilets). This paper will review the development of the array, the pyroelectric ceramic material upon which it is based, and the system's capabilities. It will present results from the Framework 5 SIMBAD project, which used the system to monitor the movements of elderly people over a considerable period of time.

  3. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger, or when the object is further away from the observer, increasing the recording device's resolution does little to improve image quality. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or the use of adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object, as well as "superimposed" turbulence, at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic camera and image processing algorithms are presented. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera at the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.

  4. Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean

    PubMed Central

    Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave

    2009-01-01

    Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by the RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at the red, green and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface while eliminating direct surface reflection. Relationships between the RGB ratios of water surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method used to acquire digital images, derive RGB values and relate the measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
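
    Treating a camera as a three-band radiometer reduces, in practice, to averaging channel values over a patch of water surface and forming band ratios. A minimal sketch of that step — the ratio names are generic optical proxies, not the paper's calibrated retrieval model:

```python
import numpy as np

def band_ratios(image_rgb):
    """Mean R, G, B over an image patch plus the G/R and B/G ratios,
    which serve as optical proxies (e.g., for yellow substance and
    chlorophyll) after site-specific calibration."""
    mean = image_rgb.reshape(-1, 3).mean(axis=0)
    r, g, b = mean
    return {"R": r, "G": g, "B": b, "G/R": g / r, "B/G": b / g}
```

    Converting a ratio into an actual concentration then requires a regression against in-water samples, as done with the narrow-band radiometers the abstract mentions.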

  5. CLOSE-UP LOOK AT A JET NEAR A BLACK HOLE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [top left] - This radio image of the galaxy M87, taken with the Very Large Array (VLA) radio telescope in February 1989, shows giant bubble-like structures where radio emission is thought to be powered by the jets of subatomic particles coming from the galaxy's central black hole. The false color corresponds to the intensity of the radio energy being emitted by the jet. M87 is located 50 million light-years away in the constellation Virgo. Credit: National Radio Astronomy Observatory/National Science Foundation [top right] - A visible light image of the giant elliptical galaxy M87, taken with NASA Hubble Space Telescope's Wide Field Planetary Camera 2 in February 1998, reveals a brilliant jet of high-speed electrons emitted from the nucleus (diagonal line across image). The jet is produced by a 3-billion-solar-mass black hole. Credit: NASA and John Biretta (STScI/JHU) [bottom] - A Very Long Baseline Array (VLBA) radio image of the region close to the black hole, where an extragalactic jet is formed into a narrow beam by magnetic fields. The false color corresponds to the intensity of the radio energy being emitted by the jet. The red region is about 1/10 light-year across. The image was taken in March 1999. Credit: National Radio Astronomy Observatory/Associated Universities, Inc.

  6. Focusing of high intensity ultrasound through the rib cage using a therapeutic random phased array

    PubMed Central

    Bobkova, Svetlana; Gavrilov, Leonid; Khokhlova, Vera; Shaw, Adam; Hand, Jeffrey

    2010-01-01

    A method for focusing high intensity ultrasound through a rib cage that aims to minimize heating of the ribs whilst maintaining high intensities at the focus (or foci) is proposed and tested theoretically and experimentally. Two approaches, one based on geometric acoustics and the other accounting for diffraction effects associated with propagation through the rib cage, are investigated theoretically for idealized source conditions. It is shown that for an idealized radiator the diffraction approach provides a 23% gain in peak intensity and results in significantly less power losses on the ribs (1% versus 7.5% of the irradiated power) compared with the geometric one. A 2D 1-MHz phased array with 254 randomly distributed elements, tissue mimicking phantoms, and samples of porcine rib cages are used in experiments; the geometric approach is used to configure how the array is driven. Intensity distributions are measured in the plane of the ribs and in the focal plane using an infra-red camera. Theoretical and experimental results show that it is possible to provide adequate focusing through the ribs without overheating them for a single focus and several foci, including steering at ± 10–15 mm off and ± 20 mm along the array axis. Focus splitting due to the periodic spatial structure of ribs is demonstrated both in simulations and experiments; the parameters of splitting are quantified. The ability to produce thermal lesions with a split focal pattern in ex vivo porcine tissue placed beyond the rib phantom is also demonstrated. The results suggest that the method is potentially useful for clinical applications of HIFU for which the rib cage lies between the transducer(s) and the targeted tissue. PMID:20510186
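
    The geometric approach to driving the array amounts to assigning each element a phase that compensates its path length to the focal point (elements shadowed by a rib can simply be switched off). A sketch with illustrative geometry and medium parameters — not the authors' code:

```python
import math

def element_phases(elements, focus, frequency_hz=1.0e6, c=1500.0):
    """Phase (radians) to drive each element so that all contributions
    arrive in phase at the focal point.

    elements: list of (x, y, z) element positions in metres.
    focus: (x, y, z) focal point in metres.
    c: assumed sound speed in water/tissue, m/s.
    """
    k = 2 * math.pi * frequency_hz / c   # acoustic wavenumber
    phases = []
    for pos in elements:
        d = math.dist(pos, focus)        # element-to-focus path length
        phases.append((-k * d) % (2 * math.pi))
    return phases
```

    Multiple foci or off-axis steering follow the same idea, with the phase of each element chosen to match a superposition of target points; the diffraction-based approach in the paper instead back-propagates the field through the rib aperture.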

  7. Space infrared telescope facility wide field and diffraction limited array camera (IRAC)

    NASA Technical Reports Server (NTRS)

    Fazio, G. G.

    1986-01-01

    IRAC focal plane detector technology was developed and studies of alternate focal plane configurations were supported. While any of the alternate focal planes under consideration would have a major impact on the Infrared Array Camera, it was possible to proceed with detector development and optical analysis research based on the proposed design since, to a large degree, the studies undertaken are generic to any SIRTF imaging instrument. Development of the proposed instrument was also important given that none of the alternate configurations had received the approval of the Science Working Group.

  8. Lunar UV-visible-IR mapping interferometric spectrometer

    NASA Technical Reports Server (NTRS)

    Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.

    1992-01-01

    An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research has defined a no-moving-parts, low-weight and low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides new and important ultraviolet spectral mapping, a high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, along with spectral results in support of the instrument design.

  9. Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.

    PubMed

    Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan

    2013-10-21

    We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
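
    The shift-and-add fusion at the heart of the reconstruction can be sketched as follows. The integer (dy, dx) shifts stand in for the registration step, which in the actual application is derived from the known angular increments of the fiber-optic taper:

```python
import numpy as np

def shift_and_add(frames, shifts):
    """Register each frame by its (dy, dx) offset and average the stack.

    frames: list of 2-D arrays (grayscale contact images).
    shifts: per-frame integer (dy, dx) offsets relative to the reference frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))  # undo the shift
    return acc / len(frames)
```

    Averaging many registered frames is what suppresses the non-uniform sampling artefacts of the fiber facet: each object point is seen through several different fiber positions, so the per-fiber sampling pattern averages out.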

  10. Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array

    PubMed Central

    Navruz, Isa; Coskun, Ahmet F.; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan

    2013-01-01

    We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ∼9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ∼3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also gets rid of spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears. PMID:23939637

  11. Occupancy models for monitoring marine fish: a bayesian hierarchical approach to model imperfect detection with a novel gear combination.

    PubMed

    Coggins, Lewis G; Bacheler, Nathan M; Gwinn, Daniel C

    2014-01-01

    Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics, lending credence to previous characterizations of red snapper as a reef habitat generalist. This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics.
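
The gain from pairing gears can be seen in the basic detection algebra: if an occupied site is sampled by independent gears with per-gear detection probabilities, the chance of at least one detection is one minus the product of the misses. A small sketch of that identity (the probability values are illustrative only, not the study's estimates):

```python
def combined_detection(probs):
    """P(at least one detection) for independent gears with probabilities in `probs`."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)  # probability every gear misses
    return 1.0 - miss

# A trap alone vs. a trap plus a camera (illustrative values only):
trap_only = combined_detection([0.3])        # 0.30
trap_plus_cam = combined_detection([0.3, 0.45])  # 1 - 0.7*0.55 = 0.615
```

With these illustrative numbers, adding the camera roughly doubles the detection probability, which is the kind of improvement the study reports.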

  12. Occupancy Models for Monitoring Marine Fish: A Bayesian Hierarchical Approach to Model Imperfect Detection with a Novel Gear Combination

    PubMed Central

    Coggins, Lewis G.; Bacheler, Nathan M.; Gwinn, Daniel C.

    2014-01-01

    Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics, lending credence to previous characterizations of red snapper as a reef habitat generalist. This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics. PMID:25255325

  13. Red to far-red multispectral fluorescence image fusion for detection of fecal contamination on apples

    USDA-ARS?s Scientific Manuscript database

    This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...

  14. Concept of electro-optical sensor module for sniper detection system

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz

    2010-10-01

    The paper presents an initial concept of an electro-optical sensor unit for sniper detection. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is a multi-sensor sniper and shot detection system. As part of a larger system it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capabilities. Acoustic (or radar) systems used for shot detection generally offer only "after-the-shot" information and cannot prevent an enemy attack, which in the case of a skilled sniper usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspect objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length and aperture were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.

  15. Camera Development for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Moncada, Roberto Jose

    2017-01-01

    With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.
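
The readout arithmetic quoted above (four TARGET chips giving 64 parallel channels per camera module, 177 modules per telescope) implies 16 channels per chip; a small sanity-check sketch of those figures:

```python
def telescope_channels(modules=177, chips_per_module=4, channels_per_chip=16):
    """Total parallel readout channels per SC telescope, from the figures above."""
    return modules * chips_per_module * channels_per_chip

per_module = 4 * 16           # 64 channels per camera module, as stated
total = telescope_channels()  # 11328 channels per telescope
```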

  16. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

    Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background; the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low contrast visibility and sea clutter (such as white caps) is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.

  17. Spatial capture–recapture with partial identity: An application to camera traps

    USGS Publications Warehouse

    Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.

    2018-01-01

    Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision, and, while not previously recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial location where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex is determined from photographs used to further resolve partial identities—one of which is resolved to near certainty. The SPIM opens the door for the investigation of trapping designs that deviate from the standard two camera design, the combination of other data types between which identities cannot be deterministically linked, and can be extended to the problem of partial genotypes.

  18. Design of a Day/Night Star Camera System

    NASA Technical Reports Server (NTRS)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2×10⁻⁵ W/cm²/sr/µm in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340 × 1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 × 6.8 µm pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degree and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
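
The quoted attitude figure is consistent with the optics: an F/2.8 lens with a 64.3 mm aperture has a focal length of about 2.8 × 64.3 ≈ 180 mm, so a 6.8 µm pixel subtends roughly 8 arcsec. A back-of-envelope check (my own, not from the paper):

```python
import math

def pixel_scale_arcsec(pixel_m, focal_m):
    """Angle subtended by one pixel, in arcseconds (small-angle approximation)."""
    return (pixel_m / focal_m) * (180.0 / math.pi) * 3600.0

focal = 2.8 * 64.3e-3                       # f-number x aperture diameter, ~0.18 m
scale = pixel_scale_arcsec(6.8e-6, focal)   # ~7.8 arcsec per pixel
```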

  19. The upgrade of the H.E.S.S. cameras

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-01-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes (IACT) located in Namibia. In order to assure the continuous operation of H.E.S.S. at its full sensitivity until and possibly beyond the advent of CTA, the older cameras, installed in 2003, are currently undergoing an extensive upgrade. Its goals are reducing the system failure rate, reducing the dead time and improving the overall performance of the array. All camera components have been upgraded, except the mechanical structure and the photo-multiplier tubes (PMTs). Novel technical solutions have been introduced: the upgraded readout electronics is based on the NECTAr analog memory chip; the control of the hardware is carried out by an FPGA coupled to an embedded ARM computer; the control software was re-written from scratch and is based on modern C++ open source libraries. These hardware and software solutions offer very good performance, robustness and flexibility. The first camera was fielded in July 2015 and has been successfully commissioned; the remaining cameras are scheduled to be upgraded in September 2016. The present contribution describes the design, the testing and the performance of the new H.E.S.S. camera and its components.

  20. In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2010-02-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
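
In NIR transillumination, carious lesions scatter light and appear darker than the surrounding sound enamel, so a contrast value can be formed from the mean intensities of the two regions. A hedged sketch of one common definition (the study's exact formula may differ):

```python
def lesion_contrast(i_sound, i_lesion):
    """(I_sound - I_lesion) / I_sound: 0 for no lesion, approaching 1 as the
    lesion region becomes fully opaque in transillumination."""
    return (i_sound - i_lesion) / i_sound

c = lesion_contrast(0.8, 0.4)  # 0.5 for these illustrative mean intensities
```

Under this definition, deeper (more severe) lesions scatter more light out of the imaging path, lowering the transmitted intensity and raising the contrast, which is the trend both cameras measured.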

  1. In vitro near-infrared imaging of occlusal dental caries using germanium enhanced CMOS camera.

    PubMed

    Lee, Chulsung; Darling, Cynthia L; Fried, Daniel

    2010-03-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  2. A drone detection with aircraft classification based on a camera array

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong

    2018-03-01

    In recent years, the rapid popularity of drones has meant that many people now operate them, bringing a range of security issues to sensitive areas such as airports and military sites. One important way to address these problems is to realize fine-grained classification, providing fast and accurate detection of different drone models. The main challenges of fine-grained classification are that: (1) there are many types of drones, and the models are complex and diverse; (2) recognition must be fast and accurate, and existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately perform fine-grained drone detection based on the HD cameras.

  3. Imaging spectroscopy using embedded diffractive optical arrays

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford

    2017-09-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to serve as a payload on a small satellite, mini-UAV or commercial quadcopter, or to be man-portable. We also describe an application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images covered by each lenslet. We have developed systems using different numbers of lenslets in the area array; the size of the focal plane and the diameter of the lenslet array determine the number of different spectral images collected simultaneously in each frame of the camera. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame. This system spans the SWIR and MWIR bands with a single optical array and focal plane array.
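
The sub-image arithmetic above generalizes directly: an n x n lenslet array over an N x N focal plane yields n² simultaneous spectral images of (N/n) x (N/n) pixels each. A small sketch of that trade-off:

```python
def subimage_layout(fpa_pixels, lenslet_grid):
    """Return (number of spectral images, pixels per side of each sub-image)
    for an n x n lenslet array over an N x N focal plane array."""
    return lenslet_grid ** 2, fpa_pixels // lenslet_grid

assert subimage_layout(512, 2) == (4, 256)    # 2 x 2 array on a 512 x 512 FPA
assert subimage_layout(1024, 4) == (16, 256)  # 4 x 4 array on a 1024 x 1024 FPA
```

More lenslets buy more simultaneous spectral bands at the cost of spatial resolution per band, which is why the two systems described both land on 256 x 256 sub-images.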

  4. Views of the starboard P6 Truss solar array during STS-97

    NASA Image and Video Library

    2000-12-05

    STS097-702-070 (3 December 2000) --- An astronaut inside Endeavour's crew cabin used a handheld 70mm camera to expose this frame of the International Space Station's starboard solar array wing panel, backdropped against an Earth horizon scene.

  5. Manifold-Based Image Understanding

    DTIC Science & Technology

    2010-06-30

    3] employs a Texas Instruments digital micromirror device (DMD), which consists of an array of N electrostatically actuated micromirrors. The camera...image x) is reflected off a digital micromirror device (DMD) array whose mirror orientations are modulated in the pseudorandom pattern φm supplied by a

  6. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit period of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear array silicon charge coupled device (CCD) based Focal Plane Array (FPA). The On-Board Calibration unit for these CCD based FPAs is used to monitor any degradation in the FPA over the entire mission life. Four LEDs are operated in constant current mode and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight model visible LEDs (λp = 650 nm) for development of the On-Board Calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant current mode.

  7. An array of virtual Frisch-grid CdZnTe detectors and a front-end application-specific integrated circuit for large-area position-sensitive gamma-ray cameras.

    PubMed

    Bolotnikov, A E; Ackley, K; Camarda, G S; Cherches, C; Cui, Y; De Geronimo, G; Fried, J; Hodges, D; Hossain, A; Lee, W; Mahler, G; Maritato, M; Petryk, M; Roy, U; Salwen, C; Vernon, E; Yang, G; James, R B

    2015-07-01

    We developed a robust and low-cost array of virtual Frisch-grid CdZnTe detectors coupled to a front-end readout application-specific integrated circuit (ASIC) for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It comprises 36 close-packed 6 × 6 × 15 mm(3) detectors grouped into 3 × 3 sub-arrays of 2 × 2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and readout electronics. Further enhancement of the arrays' performance and reduction of their cost are possible by using position-sensitive virtual Frisch-grid detectors, which allow accurate correction of response non-uniformities caused by crystal defects.

  8. Helmet-mounted uncooled FPA camera for use in firefighting applications

    NASA Astrophysics Data System (ADS)

    Wu, Cheng; Feng, Shengrong; Li, Kai; Pan, Shunchen; Su, Junhong; Jin, Weiqi

    2000-05-01

    Starting from the concept and the needs of firefighters for thermal imaging, we discuss how a helmet-mounted camera performs in the harsh environment of a conflagration, especially at high temperatures, and how the thermal imager can be better matched to the helmet in terms of weight, size, etc. Finally, we present a practical helmet-mounted IR camera based on an uncooled focal plane array detector for use in firefighting.

  9. Depth estimation using a lightfield camera

    NASA Astrophysics Data System (ADS)

    Roper, Carissa

    The latest innovation in camera design has come in the form of the lightfield, or plenoptic, camera, which uses microlens arrays to capture 4-D radiance data rather than just a 2-D scene image. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth in different portions of a given scene. There are limits to the achievable precision due to the hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on a commercially available plenoptic camera.
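
The depth cue a plenoptic camera records is the disparity of a scene point between sub-aperture views; under a pinhole stereo approximation, depth is baseline times focal length over disparity. A generic sketch of that relationship (symbols and values are illustrative, not tied to a specific camera):

```python
def depth_from_disparity(baseline_m, focal_m, disparity_m):
    """Stereo triangulation: Z = B * f / d, with disparity d measured on the
    sensor plane between two views separated by baseline B."""
    return baseline_m * focal_m / disparity_m

z = depth_from_disparity(0.01, 0.05, 1e-5)  # 50 m for these illustrative values
```

The inverse relationship between disparity and depth is what makes precision fall off quickly for distant objects, especially with the short effective baselines inside a plenoptic camera.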

  10. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    NASA Astrophysics Data System (ADS)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.
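
Because the two mirror views arrive on separate spectral channels of the 3CCD sensor, separating them amounts to slicing the colour image by channel. A minimal numpy sketch of the idea (the H x W x 3 array layout is an assumption for illustration):

```python
import numpy as np

def split_views(rgb):
    """Split an H x W x 3 colour image into the red-channel and blue-channel views."""
    return rgb[..., 0], rgb[..., 2]  # red view, blue view

# Synthetic frame: red channel carries one view, blue channel the other
img = np.zeros((4, 4, 3))
img[..., 0] = 1.0
img[..., 2] = 2.0
red_view, blue_view = split_views(img)
```

In the real system the dichroic filter is what keeps the two optical paths from leaking into each other's channels, so the slices remain nearly independent images.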

  11. The Little Red Spot: Closest View Yet

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This is a mosaic of three New Horizons images of Jupiter's Little Red Spot, taken with the spacecraft's Long Range Reconnaissance Imager (LORRI) camera at 17:41 Universal Time on February 26 from a range of 3.5 million kilometers (2.1 million miles). The image scale is 17 kilometers (11 miles) per pixel, and the area covered measures 33,000 kilometers (20,000 miles) from top to bottom, two and one-half times the diameter of Earth.

    The Little Red Spot, a smaller cousin of the famous Great Red Spot, formed in the past decade from the merger of three smaller Jovian storms, and is now the second-largest storm on Jupiter. About a year ago its color, formerly white, changed to a reddish shade similar to the Great Red Spot, perhaps because it is now powerful enough to dredge up reddish material from deeper inside Jupiter. These are the most detailed images ever taken of the Little Red Spot since its formation, and will be combined with even sharper images taken by New Horizons 10 hours later to map circulation patterns around and within the storm.

    LORRI took the images as the Sun was about to set on the Little Red Spot. The LORRI camera was designed to look at Pluto, where sunlight is much fainter than it is at Jupiter, so the images would have been overexposed if LORRI had looked at the storm when it was illuminated by the noonday Sun. The dim evening illumination helped the LORRI camera obtain well-exposed images. The New Horizons team used predictions made by amateur astronomers in 2006, based on their observations of the motion of the Little Red Spot with backyard telescopes, to help them accurately point LORRI at the storm.

    These are among a handful of Jupiter system images already returned by New Horizons during its close approach to Jupiter. Most of the data being gathered by the spacecraft are stored onboard and will be downlinked to Earth during March and April 2007.

  12. The Sensor Irony: How Reliance on Sensor Technology is Limiting Our View of the Battlefield

    DTIC Science & Technology

    2010-05-10

    Similar to the MQ-1, the MQ-9 Reaper is primarily a strike asset for emerging targets... Wescam 14TS... Both systems have an electro-optical (daylight) TV camera, an infra-red (thermal) camera, as well as a laser illuminator/range finder...

  13. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
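    The FPGA autoconvergence algorithm itself is not disclosed in the abstract, but the underlying geometry is simple: once a working distance is estimated from scene content, each camera is toed in so the optical axes cross at that distance. A hedged sketch of the symmetric toe-in relation (function name and numbers are illustrative, not from the Polaris design):

    ```python
    import math

    def convergence_angle_deg(baseline_m, distance_m):
        """Per-camera toe-in angle (degrees) so that the two optical axes
        intersect at the estimated scene distance, assuming a symmetric
        toe-in stereo rig with the given inter-camera baseline."""
        return math.degrees(math.atan2(baseline_m / 2.0, distance_m))

    # Hypothetical rig: 12 cm baseline, object of interest at 2 m.
    angle = convergence_angle_deg(0.12, 2.0)  # each camera rotates inward by ~1.7 deg
    ```

    As the object distance grows, the angle tends to zero, recovering the parallel-axis configuration.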

  14. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval because of the highly discriminative, multi-view representation of 3-D objects. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object, and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. An efficient and effective algorithm is therefore required for 3-D object retrieval. To move toward a general framework that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained on these view clusters, and retrieval works by scoring against the query model with HMM decoding. The proposed approach removes the static camera array requirement for view capture and can be applied to any 3-D object database to retrieve objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
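    As a rough illustration of the "HMM decode" scoring step, the forward algorithm below computes the log-likelihood of a sequence of discrete view-cluster labels under a toy HMM. This is a generic textbook implementation, not the EVBOR code, and all model parameters are invented:

    ```python
    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """Log-likelihood of a discrete observation sequence under an HMM
        with initial distribution pi, transition matrix A and emission
        matrix B, using the scaled forward algorithm."""
        alpha = pi * B[:, obs[0]]       # joint prob. of state and first symbol
        log_p = 0.0
        for symbol in obs[1:]:
            c = alpha.sum()             # scaling factor, accumulated in log space
            log_p += np.log(c)
            alpha = (alpha / c) @ A * B[:, symbol]
        return log_p + np.log(alpha.sum())

    # Toy 2-state, 3-symbol model standing in for view-cluster labels.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    score = forward_log_likelihood(pi, A, B, [0, 1, 2])
    ```

    In a retrieval setting, each database object's model would score the query's view-label sequence this way, and objects would be ranked by likelihood.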

  15. Low-cost low-power uncooled a-Si-based micro infrared camera for unattended ground sensor applications

    NASA Astrophysics Data System (ADS)

    Schimert, Thomas R.; Ratcliff, David D.; Brady, John F., III; Ropson, Steven J.; Gooch, Roland W.; Ritchey, Bobbi; McCardel, P.; Rachels, K.; Wand, Marty; Weinstein, M.; Wynn, John

    1999-07-01

    Low power and low cost are primary requirements for an imaging infrared camera used in unattended ground sensor arrays. In this paper, an amorphous silicon (a-Si) microbolometer-based uncooled infrared camera technology offering a low-cost, low-power solution to infrared surveillance for UGS applications is presented. A 15 x 31 micro infrared camera (MIRC) has been demonstrated which exhibits an f/1 noise equivalent temperature difference sensitivity of approximately 67 mK. This sensitivity has been achieved without a thermoelectric cooler for array temperature stabilization, thereby significantly reducing the power requirements. The chopperless camera can operate from snapshot mode (1 Hz) to video frame rate (30 Hz). Power consumption of 0.4 W without display and 0.75 W with display has been demonstrated at 30 Hz operation. The 15 x 31 camera exhibits a 35 mm camera form factor employing a low-cost f/1 singlet optic and LED display, as well as low-cost vacuum packaging. A larger 120 x 160 version of the MIRC, which has a substantially smaller form factor and incorporates all the low-cost, low-power features demonstrated in the 15 x 31 prototype, is also in development and is discussed. The a-Si microbolometer technology for the MIRC is presented, along with its key features and performance parameters.

  16. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  17. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
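    De-mosaicing such a filter array follows the same pattern as Bayer processing: each band is sub-sampled from its position in the repeating unit cell. A minimal nearest-neighbour sketch for an assumed 2x2 RGB + NIR cell (the actual mosaic layout of any given product may differ):

    ```python
    import numpy as np

    def demosaic_2x2(raw, pattern=("R", "G", "B", "NIR")):
        """Split a sensor frame carrying a 2x2 four-band filter mosaic into
        four quarter-resolution band images. Pattern order is row-major
        over the 2x2 unit cell (an assumed layout)."""
        bands = {}
        for idx, name in enumerate(pattern):
            r, c = divmod(idx, 2)          # position of this band in the unit cell
            bands[name] = raw[r::2, c::2]  # take every second pixel from that phase
        return bands

    raw = np.arange(16).reshape(4, 4)      # toy 4x4 raw frame
    bands = demosaic_2x2(raw)
    # bands["R"] samples rows 0,2 and columns 0,2 of the raw frame.
    ```

    A production pipeline would typically follow this with interpolation back to full resolution, just as colour demosaicking does.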

  18. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising step. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
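    The flavour of PCA-based patch denoising can be sketched in a few lines: estimate a principal-component basis from local patch statistics and keep only components whose variance exceeds the noise floor. This is a simplified stand-in for the paper's spatially adaptive, CFA-aware algorithm, using invented synthetic data:

    ```python
    import numpy as np

    def pca_denoise_patches(patches, noise_floor):
        """Denoise a stack of vectorised patches (n_patches x dim) by
        projecting onto the principal components whose variance exceeds
        the given noise floor, discarding noise-dominated directions."""
        mean = patches.mean(axis=0)
        centered = patches - mean
        cov = centered.T @ centered / len(patches)
        evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        proj = evecs[:, evals > noise_floor] # keep signal-dominated axes only
        return centered @ proj @ proj.T + mean

    rng = np.random.default_rng(0)
    # Rank-1 "signal": every 4-pixel patch is a multiple of a flat pattern.
    clean = np.outer(rng.standard_normal(500), [1.0, 1.0, 1.0, 1.0])
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)  # noise variance 0.01
    denoised = pca_denoise_patches(noisy, noise_floor=0.02)

    err_noisy = np.mean((noisy - clean) ** 2)
    err_denoised = np.mean((denoised - clean) ** 2)
    ```

    The paper's method additionally adapts the analysis window spatially and respects the interlaced CFA sampling, which this toy omits.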

  19. VizieR Online Data Catalog: Follow-up photometry of M101 OT2015-1 (Blagorodnova+, 2017)

    NASA Astrophysics Data System (ADS)

    Blagorodnova, N.; Kotak, R.; Polshaw, J.; Kasliwal, M. M.; Cao, Y.; Cody, A. M.; Doran, G. B.; Elias-Rosa, N.; Fraser, M.; Fremling, C.; Gonzalez-Fernandez, C.; Harmanen, J.; Jencson, J.; Kankare, E.; Kudritzki, R.-P.; Kulkarni, S. R.; Magnier, E.; Manulis, I.; Masci, F. J.; Mattila, S.; Nugent, P.; Ochner, P.; Pastorello, A.; Reynolds, T.; Smith, K.; Sollerman, J.; Taddia, F.; Terreran, G.; Tomasella, L.; Turatto, M.; Vreeswijk, P. M.; Wozniak, P.; Zaggia, S.

    2017-07-01

    The location of M101-OT2015-1 has been serendipitously imaged by numerous telescopes and instruments over the last 15 years (from 2000 to 2015). Our best quality pre-discovery image (seeing of 0.55") is an r-band exposure at -3625 days pre-peak from the Canada-France-Hawaii Telescope (CFHT). The historical optical data for M101-OT was retrieved from the CFHT MegaPrime and CFHT12K/Mosaic, using single and combined exposures, Pan-STARRS-1/GPC1 (PS1), Isaac Newton Telescope/Wide Field Camera (INT/WFC), and Sloan Digital Sky Survey (SDSS) DR 10 (Ahn+ 2014, see V/147). Unfortunately, there are no HST images covering the location of the source. Post-discovery optical magnitudes were obtained from the reported followup astronomer's telegrams (ATels), Liverpool Telescope (LT), the Nordic Optical Telescope (NOT), and the Palomar P48 and P60 telescopes. The infrared data were retrieved from CFHT/WIRCam, UKIRT/WFCAM, and the Spitzer Infrared Array Camera in 3.6 and 4.5um as part of the SPitzer InfraRed Intensive Transients Survey (SPIRITS) (Kasliwal+, 2017ApJ...839...88K). Details of pre-discovery photometry and post-discovery optical photometry may be found in the Appendices Tables 1 and 2, respectively. We obtained spectra of M101-OT using a range of facilities in 2015 Feb-Jul. (3 data files).

  20. Overview of the Atacama Cosmology Telescope: Receiver, Instrumentation, and Telescope Systems

    NASA Astrophysics Data System (ADS)

    Swetz, D. S.; Ade, P. A. R.; Amiri, M.; Appel, J. W.; Battistelli, E. S.; Burger, B.; Chervenak, J.; Devlin, M. J.; Dicker, S. R.; Doriese, W. B.; Dünner, R.; Essinger-Hileman, T.; Fisher, R. P.; Fowler, J. W.; Halpern, M.; Hasselfield, M.; Hilton, G. C.; Hincks, A. D.; Irwin, K. D.; Jarosik, N.; Kaul, M.; Klein, J.; Lau, J. M.; Limon, M.; Marriage, T. A.; Marsden, D.; Martocci, K.; Mauskopf, P.; Moseley, H.; Netterfield, C. B.; Niemack, M. D.; Nolta, M. R.; Page, L. A.; Parker, L.; Staggs, S. T.; Stryzak, O.; Switzer, E. R.; Thornton, R.; Tucker, C.; Wollack, E.; Zhao, Y.

    2011-06-01

    The Atacama Cosmology Telescope was designed to measure small-scale anisotropies in the cosmic microwave background and detect galaxy clusters through the Sunyaev-Zel'dovich effect. The instrument is located on Cerro Toco in the Atacama Desert, at an altitude of 5190 m. A 6 m off-axis Gregorian telescope feeds a new type of cryogenic receiver, the Millimeter Bolometer Array Camera. The receiver features three 1000-element arrays of transition-edge sensor bolometers for observations at 148 GHz, 218 GHz, and 277 GHz. Each detector array is fed by free space millimeter-wave optics. Each frequency band has a field of view of approximately 22' × 26'. The telescope was commissioned in 2007 and has completed its third year of operations. We discuss the major components of the telescope, camera, and related systems, and summarize the instrument performance.

  1. The Atacama Cosmology Telescope: The Receiver and Instrumentation

    NASA Technical Reports Server (NTRS)

    Swetz, D. S.; Ade, P. A. R.; Amiri, M.; Appel, J. W.; Burger, B.; Devlin, M. J.; Dicker, S. R.; Doriese, W. B.; Essinger-Hileman, T.; Fisher, R. P.; hide

    2010-01-01

    The Atacama Cosmology Telescope was designed to measure small-scale anisotropies in the Cosmic Microwave Background and detect galaxy clusters through the Sunyaev-Zel'dovich effect. The instrument is located on Cerro Toco in the Atacama Desert, at an altitude of 5190 meters. A six-meter off-axis Gregorian telescope feeds a new type of cryogenic receiver, the Millimeter Bolometer Array Camera. The receiver features three 1000-element arrays of transition-edge sensor bolometers for observations at 148 GHz, 218 GHz, and 277 GHz. Each detector array is fed by free space mm-wave optics. Each frequency band has a field of view of approximately 22' x 26'. The telescope was commissioned in 2007 and has completed its third year of operations. We discuss the major components of the telescope, camera, and related systems, and summarize the instrument performance.

  2. The Optical Green Valley Versus Mid-infrared Canyon in Compact Groups

    NASA Technical Reports Server (NTRS)

    Walker, Lisa May; Butterfield, Natalie; Johnson, Kelsey; Zucker, Catherine; Gallagher, Sarah; Konstantopoulos, Iraklis; Zabludoff, Ann; Hornschemeier, Ann E.; Tzanavaris, Panayiotis; Charlton, Jane C.

    2013-01-01

    Compact groups of galaxies provide conditions similar to those experienced by galaxies in the earlier universe. Recent work on compact groups has led to the discovery of a dearth of mid-infrared transition galaxies (MIRTGs) in Infrared Array Camera (3.6-8.0 micrometers) color space as well as at intermediate specific star formation rates. However, we find that in compact groups these MIRTGs have already transitioned to the optical ([g-r]) red sequence. We investigate the optical color-magnitude diagram (CMD) of 99 compact groups containing 348 galaxies and compare the optical CMD with mid-infrared (mid-IR) color space for compact group galaxies. Utilizing redshifts available from the Sloan Digital Sky Survey, we identified new galaxy members for four groups. By combining optical and mid-IR data, we obtain information on both the dust and the stellar populations in compact group galaxies. We also compare with more isolated galaxies and galaxies in the Coma Cluster, which reveals that, similar to clusters, compact groups are dominated by optically red galaxies. While we find that compact group transition galaxies lie on the optical red sequence, LVL (Local Volume Legacy) + SINGS (Spitzer Infrared Nearby Galaxies Survey) mid-IR transition galaxies span the range of optical colors. The dearth of mid-IR transition galaxies in compact groups may be due to a lack of moderately star-forming low mass galaxies; the relative lack of these galaxies could be due to their relatively small gravitational potential wells. This makes them more susceptible to this dynamic environment, thus causing them to more easily lose gas or be accreted by larger members.

  3. Spitzer Makes Invisible Visible

    NASA Image and Video Library

    2004-04-13

    Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (ten thousand trillion heptillion). New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud. The colorful image is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon. http://photojournal.jpl.nasa.gov/catalog/PIA05734

  4. VizieR Online Data Catalog: z>~5 AGN in Chandra Deep Field-South (Weigel+, 2015)

    NASA Astrophysics Data System (ADS)

    Weigel, A. K.; Schawinski, K.; Treister, E.; Urry, C. M.; Koss, M.; Trakhtenbrot, B.

    2015-09-01

    The Chandra 4-Ms source catalogue by Xue et al. (2011, Cat. J/ApJS/195/10) is the starting point of this work. It contains 740 sources and provides counts and observed frame fluxes in the soft (0.5-2keV), hard (2-8keV) and full (0.5-8keV) band. All object IDs used in this work refer to the source numbers listed in the Xue et al. (2011, Cat. J/ApJS/195/10) Chandra 4-Ms catalogue. We make use of Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) data from the Great Observatories Origins Deep Survey South (GOODS-south) in the optical wavelength range. We use catalogues and images for filters F435W (B), F606W (V), F775W (i) and 850LP (z) from the second GOODS/ACS data release (v2.0; Giavalisco et al., 2004, Cat. II/261). We use Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) Wide Field Camera 3 (WFC3)/infrared data from the first data release (DR1, v1.0) for passbands F105W (Y), F125W (J) and F160W (H) (Grogin et al., 2011ApJS..197...35G; Koekemoer et al., 2011ApJS..197...36K). To determine which objects are red, dusty, low-redshift interlopers, we also include the 3.6 and 4.5 micron Spitzer Infrared Array Camera (IRAC) channels. We use SIMPLE image data from the DR1 (van Dokkum et al., 2005, Spitzer Proposal, 2005.20708) and the first version of the extended SIMPLE catalogue by Damen et al. (2011, Cat. J/ApJ/727/1). (6 data files).

  5. Using focused plenoptic cameras for rich image capture.

    PubMed

    Georgiev, T; Lumsdaine, A; Chunev, G

    2011-01-01

    This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.

  6. LSST camera grid structure made out of ceramic composite material, HB-Cesic

    NASA Astrophysics Data System (ADS)

    Kroedel, Matthias R.; Langton, J. Bryan

    2016-08-01

    In this paper we present the ceramic design and fabrication of the camera grid structure, which uses the unique manufacturing features of the HB-Cesic technology together with a dedicated metrology device to ensure the challenging flatness requirement of 4 microns over the full array.

  7. Demonstration of First 9 Micron cutoff 640 x 486 GaAs Based Quantum Well Infrared PhotoDetector (QWIP) Snap-Shot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, S.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Maker, P. D.; Muller, R. E.

    1997-01-01

    In this paper, we discuss the development of this very sensitive long-wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in quantum efficiency, NEΔT, uniformity, and operability.

  8. An Intraocular Camera for Retinal Prostheses: Restoring Sight to the Blind

    NASA Astrophysics Data System (ADS)

    Stiles, Noelle R. B.; McIntosh, Benjamin P.; Nasiatka, Patrick J.; Hauer, Michelle C.; Weiland, James D.; Humayun, Mark S.; Tanguay, Armand R., Jr.

    Implantation of an intraocular retinal prosthesis represents one possible approach to the restoration of sight in those with minimal light perception due to photoreceptor degenerating diseases such as retinitis pigmentosa and age-related macular degeneration. In such an intraocular retinal prosthesis, a microstimulator array attached to the retina is used to electrically stimulate still-viable retinal ganglion cells that transmit retinotopic image information to the visual cortex by means of the optic nerve, thereby creating an image percept. We describe herein an intraocular camera that is designed to be implanted in the crystalline lens sac and connected to the microstimulator array. Replacement of an extraocular (head-mounted) camera with the intraocular camera restores the natural coupling of head and eye motion associated with foveation, thereby enhancing visual acquisition, navigation, and mobility tasks. This research is in no small part inspired by the unique scientific style and research methodologies that many of us have learned from Prof. Richard K. Chang of Yale University, and is included herein as an example of the extent and breadth of his impact and legacy.

  9. Status of the NectarCAM camera project

    NASA Astrophysics Data System (ADS)

    Glicenstein, J.-F.; Barcelo, M.; Barrio, J.-A.; Blanch, O.; Boix, J.; Bolmont, J.; Boutonnet, C.; Brun, P.; Chabanne, E.; Champion, C.; Colonges, S.; Corona, P.; Courty, B.; Delagnes, E.; Delgado, C.; Diaz, C.; Ernenwein, J.-P.; Fegan, S.; Ferreira, O.; Fesquet, M.; Fontaine, G.; Fouque, N.; Henault, F.; Gascón, D.; Giebels, B.; Herranz, D.; Hermel, R.; Hoffmann, D.; Horan, D.; Houles, J.; Jean, P.; Karkar, S.; Knödlseder, J.; Martinez, G.; Lamanna, G.; LeFlour, T.; Lévêque, A.; Lopez-Coto, R.; Louis, F.; Moudden, Y.; Moulin, E.; Nayman, P.; Nunio, F.; Olive, J.-F.; Panazol, J.-L.; Pavy, S.; Petrucci, P.-O.; Punch, M.; Prast, Julie; Ramon, P.; Rateau, S.; Ribó, M.; Rosier-Lees, S.; Sanuy, A.; Sizun, P.; Sieiro, J.; Sulanke, K.-H.; Tavernet, J.-P.; Tejedor, L. A.; Toussenel, F.; Vasileiadis, G.; Voisin, V.; Waegebert, V.; Zurbach, C.

    2014-07-01

    NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA) covering the central energy range 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz sampling Switched Capacitor Array and 12-bit Analog to Digital converter. The camera will be equipped with 265 7-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, high-voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various subcomponents and results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of electronics, read-out, clock distribution, slow control, data-acquisition, trigger, monitoring and services. A 133-pixel prototype with full scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014.

  10. Optical fiducial timing system for X-ray streak cameras with aluminum coated optical fiber ends

    DOEpatents

    Nilson, David G.; Campbell, E. Michael; MacGowan, Brian J.; Medecki, Hector

    1988-01-01

    An optical fiducial timing system is provided for use with interdependent groups of X-ray streak cameras (18). The aluminum-coated (80) ends of optical fibers (78) are positioned with the photocathodes (20, 60, 70) of the X-ray streak cameras (18). The other ends of the optical fibers (78) are placed together in a bundled array (90). A fiducial optical signal (96), comprised of 2ω or 1ω laser light, after introduction to the bundled array (90), travels to the aluminum-coated (82) optical fiber ends and ejects quantities of electrons (84) that are recorded on the data recording media (52) of the X-ray streak cameras (18). Since both 2ω and 1ω laser light can travel long distances in optical fiber with only slight attenuation, the initial areal power density of the fiducial optical signal (96) is well below the damage threshold of the fused silica or other material that comprises the optical fibers (78, 90). Thus the fiducial timing system can be used repeatedly over long durations of time.

  11. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    NASA Astrophysics Data System (ADS)

    Yudin, Alexey N.

    2011-11-01

    This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60 and 90 degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in wide-field systems for the 8-14 um spectral range. Coupling of the photodetector to the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit flushed with hydrogen to improve bulk transmission.

  12. Design of a multi-spectral imager built using the compressive sensing single-pixel camera architecture

    NASA Astrophysics Data System (ADS)

    McMackin, Lenore; Herman, Matthew A.; Weston, Tyler

    2016-02-01

    We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
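    A single-pixel camera measures inner products of the scene with DMD patterns and reconstructs the image by sparse recovery. The sketch below uses generic ±1 patterns and ISTA, a standard compressive-sensing solver; it illustrates the sampling theory the abstract invokes, not this instrument's actual reconstruction chain, and all sizes are toy values:

    ```python
    import numpy as np

    def ista(Phi, y, lam=0.02, iters=500):
        """Iterative soft-thresholding: recover a sparse scene x from
        compressive measurements y = Phi @ x by minimising the lasso
        objective 0.5*||Phi x - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(iters):
            g = x - Phi.T @ (Phi @ x - y) / L            # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
        return x

    rng = np.random.default_rng(1)
    n, m = 64, 32                            # 64-pixel scene, 32 measurements
    x_true = np.zeros(n)
    x_true[[5, 20, 40]] = [1.0, -0.8, 0.6]   # 3-sparse scene
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # DMD-like patterns
    y = Phi @ x_true                         # one photodiode reading per pattern
    x_hat = ista(Phi, y)
    ```

    The point of the architecture is that m can be far smaller than n when the scene is sparse in some basis; here the scene is sparse in the pixel basis for simplicity.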

  13. Highly reproducible and sensitive silver nanorod array for the rapid detection of Allura Red in candy

    NASA Astrophysics Data System (ADS)

    Yao, Yue; Wang, Wen; Tian, Kangzhen; Ingram, Whitney Marvella; Cheng, Jie; Qu, Lulu; Li, Haitao; Han, Caiqin

    2018-04-01

    Allura Red (AR) is a highly stable synthetic red azo dye, widely used in the food industry to dye food and increase its attraction to consumers. However, excessive consumption of AR can result in adverse health effects in humans. Therefore, a highly reproducible silver nanorod (AgNR) array was developed for surface-enhanced Raman scattering (SERS) detection of AR in candy. The relative standard deviations (RSD) of the AgNR substrate obtained from the same batch and from different batches were 5.7% and 11.0%, respectively, demonstrating high reproducibility. Using these highly reproducible AgNR arrays as SERS substrates, AR was detected successfully, and its characteristic peaks were assigned by density functional theory (DFT) calculation. The limit of detection (LOD) of AR was determined to be 0.05 mg/L, with a wide linear range of 0.8-100 mg/L. Furthermore, the AgNR SERS arrays can detect AR directly in different candy samples within 3 min without any complicated pretreatment. These results suggest the AgNR array can be used for rapid and qualitative SERS detection of AR, holding great promise for expanding SERS applications in the food safety control field.
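    The reported figures of merit follow from standard calibration arithmetic: RSD from replicate spot intensities, and LOD commonly estimated as three times the blank noise divided by the calibration slope. A sketch with hypothetical intensity values (not the paper's data) chosen to land near the reported 0.05 mg/L LOD:

    ```python
    import numpy as np

    # Hypothetical calibration: SERS peak intensity vs. AR concentration (mg/L),
    # spanning the paper's 0.8-100 mg/L linear range.
    conc = np.array([0.8, 5.0, 10.0, 25.0, 50.0, 100.0])
    intensity = np.array([42.0, 255.0, 498.0, 1260.0, 2490.0, 5010.0])
    slope, intercept = np.polyfit(conc, intensity, 1)  # linear calibration fit

    # Relative standard deviation across repeated spots on one substrate.
    replicates = np.array([500.0, 520.0, 480.0, 510.0, 490.0])
    rsd_percent = 100.0 * replicates.std(ddof=1) / replicates.mean()

    # Common LOD estimate: 3 x standard deviation of the blank / slope.
    sigma_blank = 0.8  # assumed blank noise, same intensity units
    lod = 3.0 * sigma_blank / slope
    ```

    With these invented numbers the slope is about 50 intensity units per mg/L and the LOD comes out near 0.05 mg/L, illustrating how such a figure is derived rather than reproducing the paper's measurement.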

  14. Simultaneous multispectral framing infrared camera using an embedded diffractive optical lenslet array

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele

    2011-06-01

    Recent advances in micro-optical element fabrication using gray scale technology have opened up the opportunity to create simultaneous multi-spectral imaging with fine-structure diffractive lenses. This paper discusses an approach that uses diffractive optical lenses configured in an array (lenslet array) and placed in close proximity to the focal plane array, which enables a small, compact, simultaneous multispectral imaging camera [1]. The lenslet array is designed so that all lenslets have a common focal length, with each lenslet tuned for a different wavelength. The number of simultaneous spectral images is determined by the number of individually configured lenslets in the array, and can be increased by a factor of 2 when using a dual-band focal plane array (MWIR/LWIR) by exploiting multiple diffraction orders. In addition, modulation of the focal length of the lenslet array with piezoelectric actuation enables spectral bin fill-in, allowing additional spectral coverage at the cost of simultaneity. Different lenslet array spectral imaging concept designs are presented in this paper, along with a unique concept for prefiltering the radiation focused on the detector. This approach to spectral imaging has applications in the detection of chemical agents in both aerosolized form and as a liquid on a surface. It can also be applied to the detection of weaponized biological agents and IEDs in various forms, from manufacturing to deployment and to post-detection forensic analysis.

  15. Thermal Imaging with Novel Infrared Focal Plane Arrays and Quantitative Analysis of Thermal Imagery

    NASA Technical Reports Server (NTRS)

    Gunapala, S. D.; Rafol, S. B.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Soibel, A.; Ting, D. Z.; Tidrow, Meimei

    2012-01-01

    We have developed a single long-wavelength infrared (LWIR) quantum well infrared photodetector (QWIP) camera for thermography. This camera has been used to measure the temperature profiles of patients. A pixel-co-registered, simultaneously read mid-wavelength infrared (MWIR)/LWIR dual-band QWIP camera was developed to improve the accuracy of temperature measurements, especially for objects of unknown emissivity. Even the dual-band measurement can give inaccurate results because emissivity is a function of wavelength. Thus, we have been developing a four-band QWIP camera for accurate temperature measurement of remote objects.

  16. Use of digital micromirror devices as dynamic pinhole arrays for adaptive confocal fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Pozzi, Paolo; Wilding, Dean; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2018-02-01

    In this work, we present a new confocal laser scanning microscope capable of performing sensorless wavefront optimization in real time. The device is a parallelized laser scanning microscope in which the excitation light is structured into a lattice of spots by a spatial light modulator, while a deformable mirror provides aberration correction and scanning. A binary digital micromirror device (DMD) is positioned in an image plane of the detection optical path, acting as a dynamic array of reflective confocal pinholes imaged by a high-performance CMOS camera. A second camera detects images of the light rejected by the pinholes for sensorless aberration correction.

  17. Optimization of transition edge sensor arrays for cosmic microwave background observations with the south pole telescope

    DOE PAGES

    Ding, Junjia; Ade, P. A. R.; Anderson, A. J.; ...

    2016-12-15

    In this study, we describe the optimization of transition-edge-sensor (TES) detector arrays for the third-generation camera for the South Pole Telescope. The camera, which contains ~16,000 detectors, will make high-angular-resolution maps of the temperature and polarization of the cosmic microwave background. Our key results are that scatter in the transition temperature of Ti/Au TESs is reduced by fabricating the TESs on a thin Ti(5 nm)/Au(5 nm) buffer layer, and that the thermal conductivity of the legs that support our detector islands is dominated by the SiOx dielectric in the microstrip transmission lines that run along them.

  18. Real-time, T-ray imaging using a sub-terahertz gyrotron

    NASA Astrophysics Data System (ADS)

    Han, Seong-Tae; Torrezan, Antonio C.; Sirigiri, Jagadishwar R.; Shapiro, Michael A.; Temkin, Richard J.

    2012-06-01

    We demonstrated real-time, active, T-ray imaging using a 0.46 THz gyrotron capable of producing 16 W in continuous wave operation and a pyroelectric array camera with 124-by-124 pixels. An expanded Gaussian beam from the gyrotron was used to maintain the power density above the detection level of the pyroelectric array over the area of the irradiated object. Real-time imaging at a video rate of 48 Hz was achieved through the use of the built-in chopper of the camera. Potential applications include fast scanning for security purposes and for quality control of dry or frozen foods.

  19. The gamma-ray Cherenkov telescope for the Cherenkov telescope array

    NASA Astrophysics Data System (ADS)

    Tibaldo, L.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kraus, M.; Lapington, J. S.; Laporte, P.; Lefaucheur, J.; Markoff, S.; Melse, T.; Mohrmann, L.; Molyneux, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayède, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Trichard, C.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-01-01

    The Cherenkov Telescope Array (CTA) is a forthcoming ground-based observatory for very-high-energy gamma rays. CTA will consist of two arrays of imaging atmospheric Cherenkov telescopes in the Northern and Southern hemispheres, and will combine telescopes of different types to achieve unprecedented performance and energy coverage. The Gamma-ray Cherenkov Telescope (GCT) is one of the small-sized telescopes proposed for CTA to explore the energy range from a few TeV to hundreds of TeV with a field of view ≳ 8° and angular resolution of a few arcminutes. The GCT design features dual-mirror Schwarzschild-Couder optics and a compact camera based on densely-pixelated photodetectors as well as custom electronics. In this contribution we provide an overview of the GCT project with focus on prototype development and testing that is currently ongoing. We present results obtained during the first on-telescope campaign in late 2015 at the Observatoire de Paris-Meudon, during which we recorded the first Cherenkov images from atmospheric showers with the GCT multi-anode photomultiplier camera prototype. We also discuss the development of a second GCT camera prototype with silicon photomultipliers as photosensors, and plans toward a contribution to the realisation of CTA.

  20. Lensfree microscopy on a cellphone

    PubMed Central

    Tseng, Derek; Mudanyali, Onur; Oztoprak, Cetin; Isikman, Serhan O.; Sencan, Ikbal; Yaglidere, Oguzhan; Ozcan, Aydogan

    2010-01-01

    We demonstrate lensfree digital microscopy on a cellphone. This compact and light-weight holographic microscope installed on a cellphone does not utilize any lenses, lasers or other bulky optical components and it may offer a cost-effective tool for telemedicine applications to address various global health challenges. Weighing ~38 grams (<1.4 ounces), this lensfree imaging platform can be mechanically attached to the camera unit of a cellphone where the samples are loaded from the side, and are vertically illuminated by a simple light-emitting diode (LED). This incoherent LED light is then scattered from each micro-object to coherently interfere with the background light, creating the lensfree hologram of each object on the detector array of the cellphone. These holographic signatures captured by the cellphone permit reconstruction of microscopic images of the objects through rapid digital processing. We report the performance of this lensfree cellphone microscope by imaging various sized micro-particles, as well as red blood cells, white blood cells, platelets and a waterborne parasite (Giardia lamblia). PMID:20445943

  1. VizieR Online Data Catalog: YSO candidates in the Magellanic Bridge (Chen+, 2014)

    NASA Astrophysics Data System (ADS)

    Chen, C.-H. R.; Indebetouw, R.; Muller, E.; Kawamura, A.; Gordon, K. D.; Sewilo, M.; Whitney, B. A.; Fukui, Y.; Madden, S. C.; Meade, M. R.; Meixner, M.; Oliveira, J. M.; Robitaille, T. P.; Seale, J. P.; Shiao, B.; van Loon, J. Th.

    2017-06-01

    The Spitzer observations of the Bridge were obtained as part of the Legacy Program "Surveying the Agents of Galaxy Evolution in the Tidally-Stripped, Low-Metallicity Small Magellanic Cloud" (SAGE-SMC; Gordon et al. 2011AJ....142..102G). These observations included images taken at 3.6, 4.5, 5.8, and 8.0 um bands with the InfraRed Array Camera (IRAC) and at 24, 70, and 160 um bands with the Multiband Imaging Photometer for Spitzer (MIPS). The details of data processing are given in Gordon et al. (2011AJ....142..102G). To construct multi-wavelength SEDs for sources in the Spitzer catalog, we have expanded it by adding photometry from optical and NIR surveys covering the Bridge, i.e., BRI photometry from the Super COSMOS Sky Surveys (SSS; Hambly et al. 2001MNRAS.326.1279H) and JHKs photometry from the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006AJ....131.1163S, Cat. VII/233). (5 data files).

  2. Spatially Controlled Fabrication of Brightly Fluorescent Nanodiamond-Array with Enhanced Far-Red Si-V Luminescence

    PubMed Central

    Singh, Sonal; Thomas, Vinoy; Martyshkin, Dmitry; Kozlovskaya, Veronika; Kharlampieva, Eugenia

    2014-01-01

    We demonstrate a novel approach to precisely pattern fluorescent nanodiamond arrays with enhanced, photostable far-red luminescence from silicon-vacancy (Si-V) defect centers. The precision-patterned pre-growth seeding of nanodiamonds is achieved by the scanning-probe “Dip-Pen” nanolithography technique, using electrostatically driven transfer of nanodiamonds from “inked” cantilevers to a UV-treated hydrophilic SiO2 substrate. The enhanced far-red emission from the nanodiamond dots is achieved by incorporating Si-V defect centers in a subsequent chemical vapor deposition (CVD) treatment. The development of a suitable nanodiamond ink, the mechanism of ink transport, and the effects of humidity and dwell time on nanodiamond patterning are investigated. Precision patterning of as-printed (pre-CVD) arrays with dot diameters and heights as small as 735 ± 27 nm and 61 ± 3 nm, respectively, and of CVD-treated fluorescent nanodiamond arrays with consistently patterned dots of diameter and height as small as 820 ± 20 nm and 245 ± 23 nm, respectively, is successfully achieved using a 1 s dwell time and 30% RH. We anticipate that the intense, photostable far-red luminescence (~738 nm) observed from Si-V defect centers integrated in spatially arranged nanodiamonds could be beneficial for the development of the next generation of fluorescence-based devices and applications. PMID:24394286

  3. A laboratory verification sensor

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H.

    1988-01-01

    The use of a variant of the Hartmann test to sense the coalignment of the 36 primary mirror segments of the Keck 10-meter Telescope is described. The Shack-Hartmann alignment camera is a surface-tilt-error-sensing device, operable with high sensitivity over a wide range of tilt errors. An interferometer, on the other hand, is a surface-height-error-sensing device; in general, if the surface height error exceeds a few wavelengths of the incident illumination, an interferogram is difficult to interpret and loses utility. The Shack-Hartmann alignment camera is therefore likely to be attractive as a development tool for segmented-mirror telescopes, particularly at early stages of development, when the surface quality of developmental segments may be too poor to justify interferometric testing. The constraints that would define the first-order properties of a Shack-Hartmann alignment camera are examined, and the precision and range of measurement one could expect to achieve with it are investigated. Fundamental constraints arise from consideration of geometrical imaging, diffraction, and the density of sampling of images at the detector array. Geometrical imaging determines the linear size of the image and depends on the primary mirror diameter and the f-number of a lenslet. Diffraction is another constraint; it depends on the lenslet aperture. Finally, the sampling density at the detector array is important, since the number of pixels in the image determines how accurately the centroid of the image can be measured. When these factors are considered under realistic assumptions, it is apparent that the first-order design of a Shack-Hartmann alignment camera is completely determined by these constraints, and that in the case of a 20-meter telescope with seeing-limited imaging, such a camera, used with a suitable detector array, will achieve useful precision.
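
    The three first-order constraints named in the abstract (geometric image size, diffraction, detector sampling) can be sketched numerically for one lenslet channel. All values below are illustrative assumptions, not figures from the text:

    ```python
    import math

    # Illustrative first-order sizing of one Shack-Hartmann lenslet channel.
    wavelength = 0.55e-6     # m, visible light (assumed)
    lenslet_pitch = 200e-6   # m, lenslet aperture / pitch (assumed)
    lenslet_focal = 5e-3     # m, lenslet focal length (assumed)
    pixel_size = 5e-6        # m, detector pixel (assumed)

    f_number = lenslet_focal / lenslet_pitch

    # Diffraction constraint: Airy-disk diameter behind one lenslet.
    airy_diameter = 2.44 * wavelength * f_number

    # Sampling constraint: centroiding wants the spot spread over >~2 pixels.
    pixels_per_spot = airy_diameter / pixel_size

    # Tilt sensing: a wavefront tilt theta across one lenslet shifts the
    # focal spot by f * theta.
    tilt = 1e-5              # rad, example segment tilt error
    spot_shift = lenslet_focal * tilt

    print(f"F-number: {f_number:.1f}")
    print(f"Airy spot: {airy_diameter*1e6:.1f} um ({pixels_per_spot:.1f} px)")
    print(f"Spot shift for {tilt:.0e} rad tilt: {spot_shift*1e6:.2f} um")
    ```

    The interplay is visible directly: a slower lenslet (larger f-number) enlarges the diffraction spot, which relaxes the sampling requirement but reduces the spot shift per pixel for a given tilt.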

  4. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
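
    The estimate described above rests on the grating equation d·sin(θ_m) = m·λ: because both patterns are captured through the same grating, the grating period d cancels when comparing same-order fringe angles, and the known red wavelength serves as the reference. A short sketch with hypothetical measured fringe positions (the geometry values are assumptions, not the article's data):

    ```python
    import math

    # Known reference: red laser wavelength.
    lambda_red = 650e-9   # m

    # Hypothetical first-order fringe offsets measured from the camera image,
    # with an assumed grating-to-screen distance L.
    L = 0.50              # m (assumed)
    x_red = 0.170         # m, first-order offset of the red pattern (assumed)
    x_ir = 0.245          # m, first-order offset of the IR pattern (assumed)

    theta_red = math.atan2(x_red, L)
    theta_ir = math.atan2(x_ir, L)

    # Same grating, same order (m = 1), so d cancels:
    # lambda_ir / lambda_red = sin(theta_ir) / sin(theta_red)
    lambda_ir = lambda_red * math.sin(theta_ir) / math.sin(theta_red)

    print(f"Estimated IR wavelength: {lambda_ir*1e9:.0f} nm")
    ```

    With these example numbers the estimate lands near the ~900-950 nm LEDs typical of TV remote controls, which is the kind of result the classroom exercise aims for.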

  5. Noncontact imaging of plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion with a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa

    2016-03-01

    A non-contact imaging method using a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillation in skin perfusion. In vivo experiments on human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.

  6. Predicting plasmonic coupling with Mie-Gans theory in silver nanoparticle arrays

    NASA Astrophysics Data System (ADS)

    Ranjan, M.

    2013-09-01

    Plasmonic coupling is observed in self-aligned arrays of silver nanoparticles grown on ripple-patterned substrates. Large differences between the plasmon resonance wavelengths measured and those calculated using Mie-Gans theory indicate that strong plasmonic coupling exists in the nanoparticle arrays. Although plasmonic coupling exists both along and across the arrays, it is found to be much stronger along the arrays due to the shorter interparticle gap and particle elongation. This effect is responsible for the observed optical anisotropy in such arrays. The measured red-shift, even in the transverse plasmon resonance mode, with increasing nanoparticle aspect ratio deviates from the prediction of Mie-Gans theory; this essentially means that plasmonic coupling dominates over shape anisotropy. Plasmon resonance tuning is demonstrated by varying the plasmonic coupling systematically with nanoparticle aspect ratio and ripple wavelength: the plasmon resonance red-shifts with increasing aspect ratio along the ripple, and blue-shifts with increasing ripple wavelength across the ripple. Therefore, the reported bottom-up approach for fabricating large-area coupled nanoparticle arrays can be used for various field-enhancement-based plasmonic applications.
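
    The Mie-Gans prediction referred to above comes from the depolarization factors of a prolate spheroid: as the aspect ratio grows, the longitudinal factor L drops below 1/3, so resonance requires a more negative metal permittivity and the longitudinal mode red-shifts. A self-contained sketch of those factors (textbook Gans theory for an isolated particle, not the paper's coupled-array calculation):

    ```python
    import math

    def gans_factors(aspect_ratio):
        """Depolarization factors of an isolated prolate spheroid (Gans theory).

        aspect_ratio = a/b > 1 (long over short axis); returns (L_long, L_trans).
        """
        e = math.sqrt(1.0 - 1.0 / aspect_ratio**2)   # eccentricity
        L_long = ((1.0 - e**2) / e**2) * (
            (1.0 / (2.0 * e)) * math.log((1.0 + e) / (1.0 - e)) - 1.0
        )
        L_trans = (1.0 - L_long) / 2.0
        return L_long, L_trans

    # L_long shrinks monotonically with aspect ratio: the longitudinal plasmon
    # mode of an isolated particle red-shifts as the particle elongates.
    for ar in (1.2, 1.5, 2.0, 3.0):
        L_long, L_trans = gans_factors(ar)
        print(f"a/b = {ar:.1f}: L_long = {L_long:.3f}, L_trans = {L_trans:.3f}")
    ```

    The paper's point is that the measured shifts, especially of the transverse mode, exceed what this isolated-particle model allows, which is the signature of interparticle coupling.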

  7. Defining ray sets for the analysis of lenslet-based optical systems including plenoptic cameras and Shack-Hartmann wavefront sensors

    NASA Astrophysics Data System (ADS)

    Moore, Lori

    Plenoptic cameras and Shack-Hartmann wavefront sensors are lenslet-based optical systems that do not form a conventional image. The addition of a lens array into these systems allows for the aberrations generated by the combination of the object and the optical components located prior to the lens array to be measured or corrected with post-processing. This dissertation provides a ray selection method to determine the rays that pass through each lenslet in a lenslet-based system. This first-order, ray trace method is developed for any lenslet-based system with a well-defined fore optic, where in this dissertation the fore optic is all of the optical components located prior to the lens array. For example, in a plenoptic camera the fore optic is a standard camera lens. Because a lens array at any location after the exit pupil of the fore optic is considered in this analysis, it is applicable to both plenoptic cameras and Shack-Hartmann wavefront sensors. Only a generic, unaberrated fore optic is considered, but this dissertation establishes a framework for considering the effect of an aberrated fore optic in lenslet-based systems. The rays from the fore optic that pass through a lenslet placed at any location after the fore optic are determined. This collection of rays is reduced to three rays that describe the entire lenslet ray set. The lenslet ray set is determined at the object, image, and pupil planes of the fore optic. The consideration of the apertures that define the lenslet ray set for an on-axis lenslet leads to three classes of lenslet-based systems. Vignetting of the lenslet rays is considered for off-axis lenslets. Finally, the lenslet ray set is normalized into terms similar to the field and aperture vector used to describe the aberrated wavefront of the fore optic. 
The analysis in this dissertation is complementary to other first-order models that have been developed for a specific plenoptic camera layout or Shack-Hartmann wavefront sensor application. This general analysis determines the location where the rays of each lenslet pass through the fore optic establishing a framework to consider the effect of an aberrated fore optic in a future analysis.

  8. Large format geiger-mode avalanche photodiode LADAR camera

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison

    2013-05-01

    Recently Spectrolab successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWaP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in²/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey large areas more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by roughly 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and a 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of the Spectrolab 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short-pixel protection, are implemented in this camera.
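
    The quoted ~10x gain in data collection rate follows from the product of pixel count and frame rate. A back-of-envelope check, assuming a 43-kHz frame rate for the new camera:

    ```python
    # Data-collection rate scales as pixels * frame rate.
    old_pixels = 32 * 32
    old_frame_rate = 20_000      # Hz (32x32 camera)
    new_pixels = 32 * 128
    new_frame_rate = 43_000      # Hz (assumed for the 32x128 camera)

    old_rate = old_pixels * old_frame_rate   # pixel events / s
    new_rate = new_pixels * new_frame_rate

    print(f"Old: {old_rate:.2e} px/s, new: {new_rate:.2e} px/s")
    print(f"Improvement: {new_rate / old_rate:.1f}x")
    ```

    The 4x larger array times the roughly doubled frame rate gives an ~8.6x raw rate, consistent with the abstract's rounded "10 times" claim.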

  9. Polarimetric Imaging System for Automatic Target Detection and Recognition

    DTIC Science & Technology

    2000-03-01

    technique shown in Figure 4(b) can also be used to integrate polarizer arrays with other types of imaging sensors, such as LWIR cameras and uncooled...vertical stripe pattern in this φ image is caused by nonuniformities in the particular polarizer array used. 2. CIRCULAR POLARIZATION IMAGING USING

  10. The Sydney University PAPA camera

    NASA Astrophysics Data System (ADS)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.

  11. Leading Edge. Sensors Challenges and Solutions for the 21st Century. Volume 7, Issue Number 2

    DTIC Science & Technology

    2010-01-01

    above, microbolometer technology is not very sensitive. To gain sensitivity, one needs to go to IR cameras that have cryogenically cooled detector ...QWIP) and detector arrays made from mercury cadmium telluride (MCT). Both types can be very sensitive. QWIP cameras have spectral detection bands...commercially available IR camera to meet the needs of CAPTC. One MCT camera was located that had a detection band from 7.7 µm to 11.6 µm and included an

  12. Ten-Meter Scale Topography and Roughness of Mars Exploration Rovers Landing Sites and Martian Polar Regions

    NASA Technical Reports Server (NTRS)

    Ivanov, Anton B.

    2003-01-01

    The Mars Orbiter Camera (MOC) has been operating on board the Mars Global Surveyor (MGS) spacecraft since 1998. It consists of three cameras: the Red and Blue Wide Angle cameras (FOV = 140 deg.) and the Narrow Angle camera (FOV = 0.44 deg.). The Wide Angle cameras provide surface resolution down to 230 m/pixel, and the Narrow Angle camera down to 1.5 m/pixel. This work is a continuation of a project we have reported on previously. Since then we have refined and improved our stereo correlation algorithm and have processed many more stereo pairs. We will discuss results of our stereo-pair analysis at the Mars Exploration Rover (MER) landing sites and address the feasibility of recovering topography from stereo pairs (especially in the polar regions) taken during the MGS 'Relay-16' mode.

  13. VizieR Online Data Catalog: Galactic CHaMP. II. Dense gas clumps. (Ma+, 2013)

    NASA Astrophysics Data System (ADS)

    Ma, B.; Tan, J. C.; Barnes, P. J.

    2015-04-01

    A total of 303 dense gas clumps have been detected using the HCO+(1-0) line in the CHaMP survey (Paper I, Barnes et al. 2011, J/ApJS/196/12). In this article we have derived the SED for these clumps using Spitzer, MSX, and IRAS data. The Midcourse Space Experiment (MSX) was launched in 1996 April. It conducted a Galactic plane survey (0

  14. DTO 1118 - Damaged Spektr solar array

    NASA Image and Video Library

    1998-03-04

    S89-E-5190 (25 Jan 1998) --- This Electronic Still Camera (ESC) image shows the Russian Mir Space Station's damaged solar array panel. The solar array panel was damaged as a result of an impact with an unmanned Progress re-supply ship which collided with the Mir on June 25, 1997, causing the Spektr Module to depressurize. This ESC view was taken on January 25, 1998 at 16:56:30 GMT.

  15. Arthropod prey of nestling red-cockaded woodpeckers in the upper coastal plain of South Carolina

    Treesearch

    James L. Hanula; Kathleen E. Franzreb

    1995-01-01

    Four nest cavities of the Red-cockaded Woodpecker (Picoides borealis) were monitored with automatic cameras to determine the prey selected to feed nestlings. Twelve adults were photographed making nearly 3000 nest visits. Prey in 28 arthropod taxa were recognizable in 65% of the photographic slides. Wood roaches in the genus Parcoblatta...

  16. Availability and abundance of prey for the red-cockaded woodpecker

    Treesearch

    James L. Hanula; Scott Horn

    2004-01-01

    Over a 10-year period we investigated red-cockaded woodpecker (Picoides borealis) prey use, sources of prey, prey distribution within trees and stands, and how forest management decisions affect prey abundance in South Carolina, Alabama, Georgia, and Florida. Cameras were operated at 31 nest cavities to record nest visits with prey in 4 locations...

  17. Activity patterns of American martens, fishers, snowshoe hares, and red squirrels in westcentral Montana

    Treesearch

    Kerry R. Foresman; Dean Pearson

    1999-01-01

    We investigated winter activity patterns of American Martens, Martes americana, Snowshoe Hares, Lepus americanus, and Red Squirrels, Tamiasciurus hudsonicus, in westcentral Montana between November 1994 and March 1995 using dual-sensor remote cameras. One hundred percent of Snowshoe Hare (n = 25) observations occurred at night while Martens (n = 85) exhibited...

  18. Diet of nestling red-cockaded woodpeckers at three locations

    Treesearch

    James L. Hanula; Donald Lipscomb; Kathleen E. Franzreb; Susan C. Loeb

    2000-01-01

    We conducted a 2-yr study of the nestling diet of red-cockaded woodpeckers (Picoides borealis) at three locations to determine how it varied among sites. We photographed 5939 nest visits by adult woodpeckers delivering food items for nestlings. In 1994, we located cameras near three nest cavities on the Lower Coastal Plain of South Carolina and near...

  19. Single-pixel camera with one graphene photodetector.

    PubMed

    Li, Gongxin; Wang, Wenxue; Wang, Yuechao; Yang, Wenguang; Liu, Lianqing

    2016-01-11

    Consumer cameras in the megapixel range are ubiquitous, but their improvement is hindered by the poor performance and high cost of traditional photodetectors. Graphene, a two-dimensional micro-/nano-material, has recently exhibited exceptional properties as the sensing element of a photodetector compared with traditional materials. However, it is difficult to fabricate a large-scale array of graphene photodetectors to replace a traditional photodetector array. To take full advantage of the unique characteristics of the graphene photodetector, in this study we integrated a graphene photodetector into a single-pixel camera based on compressive sensing. To begin with, we introduce a method called laser scribing for fabricating the graphene; it produces graphene components in arbitrary patterns more quickly than traditional methods and without photoresist contamination. Next, we propose a system for calibrating the optoelectrical properties of micro-/nano-photodetectors based on a digital micromirror device (DMD), which changes the light intensity by controlling the number of individual micromirrors positioned at +12°. The calibration sensitivity is driven by the sum of all micromirrors of the DMD and can be as high as 10^-5 A/W. Finally, the single-pixel camera integrated with one graphene photodetector was used to recover a static image, demonstrating the feasibility of a single-pixel imaging system with a graphene photodetector. A high-resolution image can be recovered with the camera at a sampling rate much lower than the Nyquist rate. This study is the first demonstration on record of a macroscopic camera with a graphene photodetector. The camera has the potential for high-speed, high-resolution imaging at much lower cost than traditional megapixel cameras.
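
    The single-pixel measurement model underlying such a camera is simple: each DMD pattern φᵢ masks the scene x, and the lone photodetector records yᵢ = φᵢ·x. A minimal simulation of that model (a sketch, not the paper's pipeline; it uses a full set of patterns and plain linear inversion, whereas compressive sensing would use fewer patterns plus a sparsity prior):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 16 * 16                   # a 16x16 "scene", flattened
    x = np.zeros(n)
    x[40:60] = 1.0                # simple bright stripe as the scene

    # Binary DMD patterns: one row per measurement, mirrors on (1) or off (0).
    Phi = rng.integers(0, 2, size=(n, n)).astype(float)

    # Single-pixel measurements: total light passed by each pattern.
    y = Phi @ x

    # With n independent patterns, the image follows from solving Phi x = y.
    x_rec = np.linalg.solve(Phi, y)

    print("max reconstruction error:", np.abs(x_rec - x).max())
    ```

    Replacing the full pattern set with m < n patterns and an l1-regularized solver is what lets the real system sample below the Nyquist rate.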

  20. TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope

    NASA Astrophysics Data System (ADS)

    Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.

    Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle helium cryo-cooled imaging camera equipped with a Raytheon 512×512-pixel InSb Aladdin III Quadrant focal plane array (FPA) sensitive to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera was released to the Indian and Belgian astronomical community for science observations in 2017 May. The camera offers a field of view (FoV) of ~86.5″×86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available worldwide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe bright nbL-band sources that are saturated in the Spitzer Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.

  1. Spectrally resolved laser interference microscopy

    NASA Astrophysics Data System (ADS)

    Butola, Ankit; Ahmad, Azeem; Dubey, Vishesh; Senthilkumaran, P.; Singh Mehta, Dalip

    2018-07-01

    We developed a new quantitative phase microscopy technique, namely spectrally resolved laser interference microscopy (SR-LIM), with which multi-spectral phase information of biological specimens can be quantified without color crosstalk using a color CCD camera. It is a single-shot technique in which sequential switching on and off of red, green, and blue (RGB) light sources is not required. The method is implemented using a three-wavelength interference microscope and a customized compact grating-based imaging spectrometer fitted at the output port. The results for the USAF resolution chart obtained with three different light sources, namely a halogen lamp, light-emitting diodes, and lasers, are discussed and compared. The broadband light sources (the halogen lamp and light-emitting diodes) lead to stretching in the spectrally decomposed images, whereas this is not observed with narrow-band light sources, i.e., lasers. The proposed technique is further successfully employed for single-shot quantitative phase imaging of human red blood cells at three wavelengths simultaneously without color crosstalk. Using the present technique, one can also use a monochrome camera, even though the experiments are performed using multi-color light sources. Finally, SR-LIM is not limited to RGB wavelengths; it can be extended to red, near-infrared, and infrared wavelengths, which are suitable for various biological applications.

  2. Radiometric calibration of an ultra-compact microbolometer thermal imaging module

    NASA Astrophysics Data System (ADS)

    Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.

    2017-05-01

    As microbolometer focal plane array formats steadily decrease, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the camera decreases, the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature-compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.
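
    One common form of such temperature compensation (an assumed model for illustration, not FLIR's or the authors' algorithm) treats the raw output as counts = G·L + b0 + b1·T_fpa, with the offset drifting linearly in focal-plane temperature, and fits the coefficients from blackbody views taken at several FPA temperatures:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic calibration data: known blackbody radiances viewed at two
    # focal-plane-array (FPA) temperatures. All values are assumptions.
    L_scene = np.array([10.0, 20.0, 30.0, 10.0, 20.0, 30.0])  # W/(m^2 sr)
    T_fpa = np.array([288.0, 288.0, 288.0, 308.0, 308.0, 308.0])  # K
    true_G, true_b0, true_b1 = 120.0, 500.0, 14.0
    counts = true_G * L_scene + true_b0 + true_b1 * T_fpa
    counts = counts + rng.normal(0.0, 0.5, counts.shape)      # sensor noise

    # Least-squares fit of counts = G*L + b0 + b1*T_fpa.
    A = np.column_stack([L_scene, np.ones_like(L_scene), T_fpa])
    G, b0, b1 = np.linalg.lstsq(A, counts, rcond=None)[0]

    # Invert the model: counts at any FPA temperature -> scene radiance.
    L_est = (counts - b0 - b1 * T_fpa) / G

    print(f"fitted gain G = {G:.1f} (true {true_G})")
    ```

    The same inversion applied per pixel, with the FPA temperature read from the camera's internal sensor, is what keeps the radiometric output stable as the tiny camera body heats and cools.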

  3. Integrating motion-detection cameras and hair snags for wolverine identification

    Treesearch

    Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg

    2011-01-01

    We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...

  4. Laser-Sharp Jet Splits Water

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A jet of gas firing out of a very young star can be seen ramming into a wall of material in this infrared image from NASA's Spitzer Space Telescope.

    The young star, called HH 211-mm, is cloaked in dust and can't be seen. But streaming away from the star are bipolar jets, color-coded blue in this view. The pink blob at the end of the jet to the lower left shows where the jet is hitting a wall of material. The jet is hitting the wall so hard that shock waves are being generated, which causes ice to vaporize off dust grains. The shock waves are also heating material up, producing energetic ultraviolet radiation. The ultraviolet radiation then breaks the water vapor molecules apart.

    The red color at the end of the lower jet represents shock-heated iron, sulfur and dust, while the blue color in both jets denotes shock-heated hydrogen molecules.

    HH 211-mm is part of a cluster of about 300 stars, called IC 348, located 1,000 light-years away in the constellation Perseus.

    This image is a composite of infrared data from Spitzer's infrared array camera and its multiband imaging photometer. Light with wavelengths of 3.6 and 4.5 microns is blue; 8-micron light is green; and 24-micron light is red.

  5. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  6. Sojourner Rover Near The Dice

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lander image of rover near The Dice (three small rocks behind the rover) and Yogi on sol 22. Color (red, green, and blue filters at 6:1 compression) image shows dark rocks, bright red dust, dark red soil exposed in rover tracks, and dark (black) soil. The APXS is in view at the rear of the vehicle, and the forward stereo cameras and laser light stripers are in shadow just below the front edge of the solar panel.

    NOTE: original caption as published in Science Magazine

  7. "Stereo Compton cameras" for the 3-D localization of radioisotopes

    NASA Astrophysics Data System (ADS)

    Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.

    2014-11-01

    The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there has been a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of two MPPC arrays allocated at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that detection efficiency reaches up to 0.54%, or more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays by using two Compton cameras, thus enabling the 3-D positional measurement of radioactive isotopes for the first time. From simulation data for a single point source, we verified that the source position and its distance could typically be determined to within 2 meters' accuracy, and from simulation data for two point sources we confirmed that the sources are clearly separated after event selection.
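
    The triangular-surveying idea behind the stereo measurement can be sketched in two dimensions: each Compton camera supplies a bearing toward the source, and the two rays are intersected. The geometry below (camera positions, bearings) is an illustrative assumption, not the authors' reconstruction code.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """2-D sketch of triangular surveying: camera i at point pi sees the
    source along bearing thetai (radians from the x-axis). Solves
    p1 + t1*d1 = p2 + t2*d2 for the intersection point."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # 2x2 system [d1, -d2] [t1; t2] = p2 - p1, solved by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("parallel bearings: no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two cameras on a 10 m baseline both sighting a source at (5, 5).
src = triangulate((0.0, 0.0), math.pi / 4, (10.0, 0.0), 3 * math.pi / 4)
```

    In practice each Compton camera yields a cone of possible directions rather than a single ray, so the real reconstruction intersects back-projected cones; the line intersection above is the idealized limit.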

  8. Development of an intraoperative gamma camera based on a 256-pixel mercuric iodide detector array

    NASA Astrophysics Data System (ADS)

    Patt, B. E.; Tornai, M. P.; Iwanczyk, J. S.; Levin, C. S.; Hoffman, E. J.

    1997-06-01

    A 256-element mercuric iodide (HgI₂) detector array has been developed which is intended for use as an intraoperative gamma camera (IOGC). The camera is specifically designed for use in imaging gamma-emitting radiopharmaceuticals (such as 99m-Tc labeled Sestamibi) incorporated into brain tumors in the intraoperative surgical environment. The system is intended to improve the success of tumor removal surgeries by allowing more complete removal of subclinical tumor cells without removal of excessive normal tissue. The use of HgI₂ detector arrays in this application facilitates construction of an imaging head that is very compact and has a high SNR. The detector is configured as a cross-grid array. Pixel dimensions are 1.25 mm squares separated by 0.25 mm. The overall dimension of the detector is 23.75 mm on a side. The detector thickness is 1 mm, which corresponds to over 60% stopping at 140 keV. The array has good uniformity, with an average energy resolution of 5.2±2.9% FWHM at 140 keV (best resolution was 1.9% FWHM). Response uniformity (±σ) was 7.9%. A study utilizing realistic tumor phantoms (uptake ratio varied from 2:1 to 100:1) in background (1 mCi/l) was conducted. SNRs for the reasonably achievable uptake ratio of 50:1 were 5.61σ with 1 cm of background depth ("normal tissue") and 2.74σ with 4 cm of background for a 6.3 μl tumor phantom (~270 nCi at the time of the measurement).

  9. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  10. Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  11. GCT, the Gamma-ray Cherenkov Telescope for multi-TeV science with the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sol, H.; Dournaux, J.-L.; Laporte, P.

    2016-12-01

    GCT is a gamma-ray telescope proposed for the high-energy section of the Cherenkov Telescope Array (CTA). A GCT prototype telescope has been designed, built and installed at the Observatoire de Paris in Meudon. Equipped with the first GCT prototype camera developed by an international collaboration, the complete GCT prototype was inaugurated in December 2015, after getting its first Cherenkov light on the night sky in November. The phase of tests, assessment, and optimisation is now coming to an end. Pre-production of the first GCT telescopes and cameras should start in 2017, for an installation on the Chilean site of CTA in 2018.

  12. Images of the future - Two decades in astronomy

    NASA Technical Reports Server (NTRS)

    Weistrop, D.

    1982-01-01

    Future instruments for the 100-10,000 Å wavelength region will require detectors with greater quantum efficiency, smaller picture elements, a greater wavelength range, and greater active area than those currently available. After assessing the development status and performance characteristics of vidicons, image tubes, electronographic cameras, digicons, silicon arrays and microchannel plate intensifiers presently employed by astronomical spacecraft, attention is given to such next-generation detectors as the Mosaicked Optical Self-scanned Array Imaging Camera, which consists of a photocathode deposited on the input side of a microchannel plate intensifier. The problems posed by the signal processing and data analysis requirements of the devices foreseen for the 21st century are noted.

  13. VERITAS: status c.2005

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weekes, T. C.; Atkins, R. W.; Badran, H. M.

    2006-07-11

    VERITAS (Very Energetic Radiation Imaging Telescope Array System), is one of a new generation of TeV gamma-ray observatories. The current status of its construction is described here. The first two telescopes and cameras have been completed and meet the design specifications; the full array of four telescopes could be operational by the end of 2006.

  14. Prototype AEGIS: A Pixel-Array Readout Circuit for Gamma-Ray Imaging.

    PubMed

    Barber, H Bradford; Augustine, F L; Furenlid, L; Ingram, C M; Grim, G P

    2005-07-31

    Semiconductor detector arrays made of CdTe/CdZnTe are expected to be the main components of future high-performance, clinical nuclear medicine imaging systems. Such systems will require small pixel-pitch and much larger numbers of pixels than are available in current semiconductor-detector cameras. We describe the motivation for developing a new readout integrated circuit, AEGIS, for use in hybrid semiconductor detector arrays, that may help spur the development of future cameras. A basic design for AEGIS is presented together with results of an HSPICE™ simulation of the performance of its unit cell. AEGIS will have a shaper-amplifier unit cell and neighbor pixel readout. Other features include the use of a single input power line with other biases generated on-board, a control register that allows digital control of all thresholds and chip configurations and an output approach that is compatible with list-mode data acquisition. An 8×8 prototype version of AEGIS is currently under development; the full AEGIS will be a 64×64 array with 300 μm pitch.

  15. An array of virtual Frisch-grid CdZnTe detectors and a front-end application-specific integrated circuit for large-area position-sensitive gamma-ray cameras

    DOE PAGES

    Bolotnikov, A. E.; Ackley, K.; Camarda, G. S.; ...

    2015-07-28

    We developed a robust and low-cost array of virtual Frisch-grid CdZnTe (CZT) detectors coupled to a front-end readout ASIC for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It is comprised of 36 close-packed 6×6×15 mm³ detectors grouped into 3×3 sub-arrays of 2×2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and their readout electronics. Further enhancement of the arrays' performance and reduction of their cost are made possible by using position-sensitive virtual Frisch-grid detectors, which allow for accurate correction of response non-uniformities caused by crystal defects.

  16. SMA observations of the W3(OH) complex: Dynamical differentiation between W3(H2O) and W3(OH)

    NASA Astrophysics Data System (ADS)

    Qin, Sheng-Li; Schilke, Peter; Wu, Jingwen; Liu, Tie; Wu, Yuefang; Sánchez-Monge, Álvaro; Liu, Ying

    2016-03-01

    We present Submillimeter Array observations of the HCN (3-2) and HCO+ (3-2) molecular lines towards the W3(H2O) and W3(OH) star-forming complexes. Infall and outflow motions in W3(H2O) have been characterized by observing the HCN and HCO+ transitions. High-velocity blue/red-shifted emission, tracing the outflow, shows multiple knots, which might originate in episodic and precessing outflows. 'Blue-peaked' line profiles indicate that gas is infalling on to the W3(H2O) dust core. The measured large mass accretion rate, 2.3 × 10⁻³ M⊙ yr⁻¹, together with the small free-fall time-scale, 5 × 10³ yr, suggests W3(H2O) is in an early evolutionary stage of the process of formation of high-mass stars. For W3(OH), a two-layer model fit to the HCN and HCO+ spectral lines and Spitzer/Infrared Array Camera (IRAC) images supports the picture that the W3(OH) H II region is expanding and interacting with the ambient gas, with the shocked neutral gas expanding on a time-scale of 6.4 × 10³ yr. The observations suggest different kinematical time-scales and dynamical states for W3(H2O) and W3(OH).

  17. Spatially controlled fabrication of a bright fluorescent nanodiamond-array with enhanced far-red Si-V luminescence.

    PubMed

    Singh, Sonal; Thomas, Vinoy; Martyshkin, Dmitry; Kozlovskaya, Veronika; Kharlampieva, Eugenia; Catledge, Shane A

    2014-01-31

    We demonstrate a novel approach to precisely pattern fluorescent nanodiamond arrays with enhanced, intense, photostable far-red luminescence from silicon-vacancy (Si-V) defect centers. The precision-patterned pre-growth seeding of nanodiamonds is achieved by a scanning-probe 'dip-pen' nanolithography technique using electrostatically driven transfer of nanodiamonds from 'inked' cantilevers to a UV-treated hydrophilic SiO2 substrate. The enhanced far-red emission from the nanodiamond dots is achieved by incorporating Si-V defect centers in a subsequent chemical vapor deposition (CVD) treatment. The development of a suitable nanodiamond ink, the mechanism of ink transport, and the effects of humidity and dwell time on nanodiamond patterning are investigated. Using a 1 s dwell time and 30% RH, we successfully achieved precision patterning of as-printed (pre-CVD) arrays with dot diameters and heights as small as 735 ± 27 nm and 61 ± 3 nm, respectively, and of CVD-treated fluorescent ND-arrays with consistently patterned dots having diameters and heights as small as 820 ± 20 nm and 245 ± 23 nm, respectively. We anticipate that the intense, photostable far-red luminescence (~738 nm) observed from Si-V defect centers integrated in spatially arranged nanodiamonds could be beneficial for the development of next-generation fluorescence-based devices and applications.

  18. Raspberry Pi camera with intervalometer used as crescograph

    NASA Astrophysics Data System (ADS)

    Albert, Stefan; Surducan, Vasile

    2017-12-01

    The intervalometer is an attachment or facility on a photo-camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open-source operating system allows intervalometer implementation on Canon cameras only, and finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is impossible. In experiments requiring several cameras (used to measure growth in plants - the crescographs - but also for coarse evaluation of the water content of leaves), the costs of the equipment are often over budget. Using two Raspberry Pi modules, each equipped with a low-cost NIR camera and a WIFI adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low-budget intervalometer cameras. The shooting interval, the number of pictures to be taken, the image resolution and some other parameters can be fully programmed. The cameras have been in use continuously for three months (July-October 2017) in a relevant environment (outside), proving the functionality of the concept.
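
    An intervalometer of this kind reduces to a timed capture loop. The sketch below is a minimal, hypothetical version: `shutter` stands in for whatever actually triggers a capture on the Raspberry Pi (for example, a camera-library call or an external command), which is assumed rather than shown.

```python
import time

def run_intervalometer(shutter, interval_s, n_frames, sleep=time.sleep):
    """Fire shutter(i) for n_frames frames, waiting interval_s seconds
    between shots. Returns the indices of the frames captured.
    `sleep` is injectable so the loop can be tested without waiting."""
    captured = []
    for i in range(n_frames):
        shutter(i)          # on a real Pi this would trigger a capture
        captured.append(i)
        if i < n_frames - 1:
            sleep(interval_s)
    return captured
```

    For a crescograph, `interval_s` would be minutes to hours, and the frame index would typically be folded into the saved filename so images sort chronologically on the SD card.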

  19. Cinematic camera emulation using two-dimensional color transforms

    NASA Astrophysics Data System (ADS)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion capture device to establish a particular look, the use of a smaller-form-factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics differ between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
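
    The 3x3 matrix baseline mentioned above can be fitted by least squares from paired patch responses of the two cameras. The patch values and the "true" matrix below are synthetic illustrations, not the paper's measured spectral data.

```python
import numpy as np

# Hypothetical paired responses: each row is the RGB response of the
# source (DSLR) camera to one color patch.
src = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.0, 0.2, 0.9],
                [0.5, 0.5, 0.5]])

# Synthetic "ground truth" emulation matrix used to fabricate the
# destination (cinema) camera's responses: dst = src @ M_true.T
M_true = np.array([[1.10, -0.10, 0.00],
                   [0.05,  0.90, 0.05],
                   [0.00, -0.05, 1.05]])
dst = src @ M_true.T

# Least-squares fit of the 3x3 emulation matrix from the patch pairs.
X, *_ = np.linalg.lstsq(src, dst, rcond=None)
M = X.T  # rows map source RGB -> destination RGB
```

    A 2D transform generalizes this by letting each output channel depend nonlinearly on a pair of chromaticity-like coordinates, which is what gives it the edge on highly chromatic content.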

  20. Behavior of red tree voles (Arborimus longicaudus) based on continuous video monitoring of nests

    Treesearch

    Eric D. Forsman; James K. Swingle; Nicholas R. Hatch

    2009-01-01

    We used video cameras to observe the activity patterns and behavior of three female red tree voles (Arborimus longicaudus) and their young in arboreal nests in western Oregon. Observation periods at the three nests were 63, 103 and 148 days. All three voles were primarily nocturnal, but occasionally foraged for brief periods during the day when...

  1. Method and apparatus for calibrating a display using an array of cameras

    NASA Technical Reports Server (NTRS)

    Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)

    2001-01-01

    The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.

  2. Nondestructive assessment of the severity of occlusal caries lesions with near-infrared imaging at 1310 nm.

    PubMed

    Lee, Chulsung; Lee, Dustin; Darling, Cynthia L; Fried, Daniel

    2010-01-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study is to determine whether the lesion contrast derived from NIR imaging in both transmission and reflectance can be used to estimate lesion severity. Two NIR imaging detector technologies are investigated: a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS)-based NIR imaging camera, and an InGaAs focal plane array (FPA). Natural occlusal caries lesions are imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions is calculated. After NIR imaging, teeth are sectioned and examined using polarized light microscopy (PLM) and transverse microradiography (TMR) to determine lesion severity. Lesions are then classified into four categories according to lesion severity. Lesion contrast increases significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera equipped with the larger array and smaller pixels yields higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  3. Nondestructive assessment of the severity of occlusal caries lesions with near-infrared imaging at 1310 nm

    PubMed Central

    Lee, Chulsung; Lee, Dustin; Darling, Cynthia L.; Fried, Daniel

    2010-01-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study is to determine whether the lesion contrast derived from NIR imaging in both transmission and reflectance can be used to estimate lesion severity. Two NIR imaging detector technologies are investigated: a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS)-based NIR imaging camera, and an InGaAs focal plane array (FPA). Natural occlusal caries lesions are imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions is calculated. After NIR imaging, teeth are sectioned and examined using polarized light microscopy (PLM) and transverse microradiography (TMR) to determine lesion severity. Lesions are then classified into four categories according to lesion severity. Lesion contrast increases significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera equipped with the larger array and smaller pixels yields higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity. PMID:20799842

  4. Nondestructive assessment of the severity of occlusal caries lesions with near-infrared imaging at 1310 nm

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Lee, Dustin; Darling, Cynthia L.; Fried, Daniel

    2010-07-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study is to determine whether the lesion contrast derived from NIR imaging in both transmission and reflectance can be used to estimate lesion severity. Two NIR imaging detector technologies are investigated: a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS)-based NIR imaging camera, and an InGaAs focal plane array (FPA). Natural occlusal caries lesions are imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions is calculated. After NIR imaging, teeth are sectioned and examined using polarized light microscopy (PLM) and transverse microradiography (TMR) to determine lesion severity. Lesions are then classified into four categories according to lesion severity. Lesion contrast increases significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera equipped with the larger array and smaller pixels yields higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  5. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km² during 2004. Synthesis and applications: Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation thus greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
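
    Spatial capture–recapture models of this kind typically relate an individual's capture probability at a trap to the distance between its activity center and that trap. The half-normal detection function below is a standard choice for such models, sketched here with made-up parameters rather than the paper's fitted values.

```python
import math

def detection_prob(p0, sigma, dist):
    """Half-normal detection function: baseline capture probability p0
    at a trap decays with the distance `dist` between the individual's
    activity center and the trap; sigma sets the spatial scale of
    movement (same distance units as dist)."""
    return p0 * math.exp(-dist**2 / (2 * sigma**2))

# Illustrative values: p0 = 0.3 per occasion, sigma = 1.5 km.
p_at_center = detection_prob(0.3, 1.5, 0.0)   # full baseline probability
p_far_away = detection_prob(0.3, 1.5, 6.0)    # nearly undetectable
```

    In the hierarchical model, the unknown activity centers are latent variables; data augmentation adds all-zero capture histories for never-detected individuals so that abundance, and hence density, can be estimated within the Bayesian framework.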

  6. The NOAO NEWFIRM Data Handling System

    NASA Astrophysics Data System (ADS)

    Zárate, N.; Fitzpatrick, M.

    2008-08-01

    The NOAO Extremely Wide-Field IR Mosaic (NEWFIRM) is a new 1-2.4 micron IR camera that is now being commissioned for the 4m Mayall telescope at Kitt Peak. The focal plane consists of a 2x2 mosaic of 2048x2048 arrays offering a field-of-view of 27.6' on a side. The use of dual MONSOON array controllers permits very fast readout, and a scripting interface allows for highly efficient observing modes. We describe the Data Handling System (DHS) for the NEWFIRM camera, which is designed to meet the performance requirements of the instrument as well as the observing environment in which it operates. It is responsible for receiving the data stream from the detector and instrument software, rectifying the image geometry, presenting a real-time display of the image to the user, final assembly of a science-grade image with complete headers, as well as triggering automated pipeline and archival functions. The DHS uses an event-based messaging system to control multiple processes on a distributed network of machines. The asynchronous nature of this processing means the DHS operates independently from the camera readout, and the design of the system is inherently scalable to larger focal planes that use a greater number of array controllers. Current status and future plans for the DHS are also discussed.

  7. Automatic exposure for panoramic systems in uncontrolled lighting conditions: a football stadium case study

    NASA Astrophysics Data System (ADS)

    Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål

    2014-02-01

    One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap between the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system with a particular focus on the camera settings. One of the main challenges in building such a system is that there is not one common area of the pitch visible to all the cameras that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided us with acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, there still existed some disparity between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
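
    The pilot-camera strategy the authors favor can be sketched as a per-frame broadcast: one camera meters in auto-exposure mode, and its exposure value is pushed to the rest of the array in the gap between frames. The single-number EV representation and the dictionary-based rig below are simplifications of ours, not the paper's implementation.

```python
def sync_exposures(pilot_metered_ev, cameras):
    """One synchronization round: the pilot camera has metered the scene,
    and its exposure value is adopted by every camera in the array before
    the next frame is triggered, so all streams match photometrically."""
    for cam in cameras:
        cam["ev"] = pilot_metered_ev
    return cameras

# Hypothetical four-camera rig with initially mismatched exposures.
rig = [{"id": i, "ev": 10.0 + i} for i in range(4)]
rig = sync_exposures(12.5, rig)
```

    Because the broadcast happens in the inter-frame interval, every camera exposes the next frame with identical settings, which removes the seam-brightness disparities that per-camera auto-exposure produces in the stitched panorama.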

  8. Omega Centauri Looks Radiant in Infrared

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Poster Version

    A cluster brimming with millions of stars glistens like an iridescent opal in this image from NASA's Spitzer Space Telescope. Called Omega Centauri, the sparkling orb of stars is like a miniature galaxy. It is the biggest and brightest of the 150 or so similar objects, called globular clusters, that orbit around the outside of our Milky Way galaxy. Stargazers at southern latitudes can spot the stellar gem with the naked eye in the constellation Centaurus.

    Globular clusters are some of the oldest objects in our universe. Their stars are over 12 billion years old, and, in most cases, formed all at once when the universe was just a toddler. Omega Centauri is unusual in that its stars are of different ages and possess varying levels of metals, or elements heavier than helium. Astronomers say this points to a different origin for Omega Centauri than other globular clusters: they think it might be the core of a dwarf galaxy that was ripped apart and absorbed by our Milky Way long ago.

    In this new view of Omega Centauri, Spitzer's infrared observations have been combined with visible-light data from the National Science Foundation's Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory in Chile. Visible-light data with a wavelength of 0.55 microns is colored blue, 3.6-micron infrared light captured by Spitzer's infrared array camera is colored green and 24-micron infrared light taken by Spitzer's multiband imaging photometer is colored red.

    Where green and red overlap, the color yellow appears. Thus, the yellow and red dots are stars revealed by Spitzer. These stars, called red giants, are more evolved, larger and dustier. The stars that appear blue were spotted in both visible and 3.6-micron-, or near-, infrared light. They are less evolved, like our own sun. Some of the red spots in the picture are distant galaxies beyond our own.

    Spitzer found very little dust around any but the most luminous, coolest red giants, implying that the dimmer red giants do not form significant amounts of dust. The space between the stars in Omega Centauri was also found to lack dust, which means the dust is rapidly destroyed or leaves the cluster.

  9. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high-resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
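
    The MART update at the heart of the reconstruction step can be sketched in a few lines. This is a minimal illustration of the multiplicative update only; the weight matrix construction, relaxation factor, and iteration count are assumptions, not the authors' implementation.

```python
import numpy as np

def mart(W, g, n_voxels, n_iter=20, mu=1.0):
    """Multiplicative ART: iteratively scale each voxel of the intensity
    field f so that the forward projection W @ f matches the measured
    pixel intensities g.  W is an (n_pixels x n_voxels) weight matrix
    whose entries (assumed in [0, 1]) couple voxels to sensor pixels."""
    f = np.ones(n_voxels)              # uniform, strictly positive initial guess
    for _ in range(n_iter):
        for i in range(len(g)):
            w = W[i]
            proj = w @ f               # current projection onto pixel i
            if proj > 0:
                # multiplicative update, damped by relaxation factor mu
                f *= (g[i] / proj) ** (mu * w)
    return f
```

    With an identity weight matrix the update converges in a single pass, which makes the behavior easy to sanity-check before building a realistic lenslet-based weight matrix.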

  10. Performance measurement of commercial electronic still picture cameras

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te

    1998-06-01

    Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). The evaluation results for individual color components and the luminance signal from a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
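
    The peak S/N ratio used above as a fixed-pattern-noise figure of merit can be computed as follows. This is a minimal sketch; the paper's exact test-chart and frame-averaging protocol are not reproduced here.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images.
    For fixed-pattern noise, a single flat-field frame can be compared
    against the mean of many flat-field frames (temporal noise averages
    out, leaving the fixed pattern)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float('inf')                     # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```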

  11. Long-Wavelength 640 x 486 GaAs/AlGaAs Quantum Well Infrared Photodetector Snap-Shot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, Sarath D.; Bandara, Sumith V.; Liu, John K.; Hong, Winn; Sundaram, Mani; Maker, Paul D.; Muller, Richard E.; Shott, Craig A.; Carralejo, Ronald

    1998-01-01

    A 9-micrometer cutoff 640 x 486 snap-shot quantum well infrared photodetector (QWIP) camera has been demonstrated. The performance of this QWIP camera is reported including indoor and outdoor imaging. A noise equivalent differential temperature (NEΔT) of 36 mK has been achieved at 300 K background with f/2 optics. This is in good agreement with the expected focal plane array sensitivity due to the practical limitations on charge handling capacity of the multiplexer, read noise, bias voltage, and operating temperature.

  12. Manned observations technology development, FY 1992 report

    NASA Technical Reports Server (NTRS)

    Israel, Steven

    1992-01-01

    This project evaluated the suitability of the NASA/JSC developed electronic still camera (ESC) digital image data for Earth observations from the Space Shuttle, as a first step to aid planning for Space Station Freedom. Specifically, image resolution achieved from the Space Shuttle using the current ESC system, which is configured with a Loral 15 mm x 15 mm (1024 x 1024 pixel array) CCD chip on the focal plane of a Nikon F4 camera, was compared to that of current handheld 70 mm Hasselblad 500 EL/M film cameras.

  13. In vivo imaging of tissue scattering parameter and cerebral hemodynamics in rat brain with a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Mustari, Afrina; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu; Kokubo, Yasuaki

    2017-02-01

    We propose a rapid imaging method to monitor the spatial distributions of the total hemoglobin concentration (CHbT), the tissue oxygen saturation, and the scattering power b in the expression μs' = aλ^(-b) as the scattering parameter in the cerebral cortex using a digital red-green-blue camera. In the method, the RGB values are converted into tristimulus values in the CIE XYZ color space, which is compatible with the common RGB working spaces. A Monte Carlo simulation (MCS) of light transport in tissue is used to specify the relation among the tristimulus XYZ values, the concentration of oxygenated hemoglobin, that of deoxygenated hemoglobin, and the scattering power b. In the present study, we performed sequential recordings of RGB images of in vivo exposed rat brain during cortical spreading depolarization (CSD) evoked by topical application of KCl. Changes in the total hemoglobin concentration and the tissue oxygen saturation imply a temporary change in cerebral blood flow during CSD. A decrease in the scattering power b was observed before the profound increase in the total hemoglobin concentration, which is indicative of reversible morphological changes in brain tissue during CSD. The results of this study indicate the potential of the method to evaluate pathophysiological conditions in brain tissue with a digital red-green-blue camera.
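
    The RGB-to-XYZ conversion step can be illustrated with the standard sRGB (D65) matrix. This is a generic sketch: the sRGB working space is an assumption here, and the paper's camera-specific calibration is not reproduced.

```python
import numpy as np

# sRGB (D65) linear-RGB -> CIE XYZ matrix (IEC 61966-2-1); substitute the
# matrix of the camera's actual RGB working space if it is not sRGB.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Convert 8-bit sRGB values to CIE XYZ tristimulus values."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # undo the sRGB gamma to recover linear RGB
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return M_SRGB_TO_XYZ @ linear
```

    Sanity check: sRGB white (255, 255, 255) maps to Y = 1.0 by construction, since the middle row of the matrix sums to one.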

  14. High resolution imaging of the Venus night side using a Rockwell 128x128 HgCdTe array

    NASA Technical Reports Server (NTRS)

    Hodapp, K.-W.; Sinton, W.; Ragent, B.; Allen, D.

    1989-01-01

    The University of Hawaii operates an infrared camera with a 128x128 HgCdTe detector array on loan from JPL's High Resolution Imaging Spectrometer (HIRIS) project. The characteristics of this camera system are discussed. The infrared camera was used to obtain images of the night side of Venus prior to and after inferior conjunction in 1988. The images confirm Allen and Crawford's (1984) discovery of bright features on the dark hemisphere of Venus visible in the H and K bands. Our images of these features are the best obtained to date. The researchers derive a pseudo-rotation period of 6.5 days for these features and 1.74-micron brightness temperatures between 425 K and 480 K. The features are produced by nonuniform absorption in the middle cloud layer (47 to 57 km altitude) of thermal radiation from the lower Venus atmosphere (20 to 30 km altitude). A more detailed analysis of the data is in progress.

  15. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera, utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes the electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having as inputs the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
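
    The range recovery from phase-delayed modulator outputs can be illustrated with a common four-phase demodulation scheme. This is a generic sketch of phase-sensitive ranging, not the patent's actual multi-modulator circuit; the function and parameter names are illustrative.

```python
import math

def phase_to_range(a0, a90, a180, a270, mod_freq_hz):
    """Estimate range from four correlator outputs taken at reference
    phase delays of 0, 90, 180 and 270 degrees.  The phase of the
    returned light is recovered from the in-phase and quadrature
    differences, then mapped to a distance."""
    phi = math.atan2(a90 - a270, a0 - a180)   # phase shift of reflected light
    if phi < 0:
        phi += 2 * math.pi                    # wrap into [0, 2*pi)
    c = 299_792_458.0                         # speed of light, m/s
    # one full cycle of round-trip phase spans the ambiguity interval c/(2f)
    return (phi / (2 * math.pi)) * c / (2 * mod_freq_hz)
```

    At a 10 MHz modulation frequency the unambiguous range is about 15 m; a phase shift of pi corresponds to half that distance.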

  16. Characterization and commissioning of the SST-1M camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Aguilar, J. A.; Bilnik, W.; Błocki, J.; Bogacz, L.; Borkowski, J.; Bulik, T.; Cadoux, F.; Christov, A.; Curyło, M.; della Volpe, D.; Dyrda, M.; Favre, Y.; Frankowski, A.; Grudnik, Ł.; Grudzińska, M.; Heller, M.; Idźkowski, B.; Jamrozy, M.; Janiak, M.; Kasperek, J.; Lalik, K.; Lyard, E.; Mach, E.; Mandat, D.; Marszałek, A.; Medina Miranda, L. D.; Michałowski, J.; Moderski, R.; Montaruli, T.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Paśko, P.; Pech, M.; Porcelli, A.; Prandini, E.; Rajda, P.; Rameez, M.; Schioppa, E., Jr.; Schovanek, P.; Seweryn, K.; Skowron, K.; Sliusar, V.; Sowiński, M.; Stawarz, Ł.; Stodulska, M.; Stodulski, M.; Toscano, S.; Troyano Pujadas, I.; Walter, R.; Wiȩcek, M.; Zagdański, A.; Ziȩtara, K.; Żychowski, P.

    2017-02-01

    The Cherenkov Telescope Array (CTA), the next-generation very high energy gamma-ray observatory, will consist of three types of telescopes: large (LST), medium (MST) and small (SST) size telescopes. The SSTs are dedicated to the observation of gamma-rays with energies between a few TeV and a few hundred TeV. The SST array is expected to have 70 telescopes of different designs. The single-mirror small size telescope (SST-1M) is one of the proposed telescope designs under consideration for the SST array. It will be equipped with a 4 m diameter segmented mirror dish and with an innovative camera based on silicon photomultipliers (SiPMs). The challenge is not only to build a telescope with exceptional performance but to do so with mass production in mind. To address both of these challenges, the camera adopts innovative solutions for both the optical system and the readout. The Photo-Detection Plane (PDP) of the camera is composed of 1296 pixels, each made of a hollow hexagonal light guide coupled to a hexagonal SiPM designed by the University of Geneva and Hamamatsu. As no commercial ASIC would satisfy the CTA requirements when coupled to such a large sensor, dedicated preamplifier electronics have been designed. The readout electronics also take an approach innovative in gamma-ray astronomy by being fully digital. All signals coming from the PDP are digitized by a 250 MHz fast ADC and stored in ring buffers awaiting a trigger decision to send them to the pre-processing server, where calibration and higher-level triggers decide whether the data are stored. The latest generation of FPGAs is used to achieve high data rates and to exploit the full flexibility of the system; as an example, each event can be flagged according to its trigger pattern. All of these features have been demonstrated in laboratory measurements on realistic elements, and the results of these measurements will be presented in this contribution.
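
    The trigger-and-ring-buffer readout described above can be modeled with a toy sketch. The buffer depth and data types here are illustrative, not the camera's actual FADC parameters.

```python
from collections import deque

class TriggerBuffer:
    """Toy model of a triggered readout: ADC samples stream continuously
    into a fixed-depth ring buffer, and the buffered waveform is only
    shipped to the server when a trigger fires."""

    def __init__(self, depth=1024):
        self.ring = deque(maxlen=depth)   # oldest samples fall off the front

    def push(self, sample):
        """Append one digitized sample; evicts the oldest when full."""
        self.ring.append(sample)

    def read_event(self):
        """On a trigger decision, snapshot the buffered waveform."""
        return list(self.ring)
```

    The `deque(maxlen=...)` eviction mirrors how a hardware ring buffer keeps only the most recent samples while waiting for the trigger decision.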

  17. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and a fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute, controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates of 60 and 30 (reduced from 120) frames per second and reduced image resolution of 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.

  18. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of our method is implemented with image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images compared to the original sub-aperture images and to the case without depth refinement.

  19. Towards real-time non contact spatial resolved oxygenation monitoring using a multi spectral filter array camera in various light conditions

    NASA Astrophysics Data System (ADS)

    Bauer, Jacob R.; van Beekum, Karlijn; Klaessens, John; Noordmans, Herke Jan; Boer, Christa; Hardeberg, Jon Y.; Verdaasdonk, Rudolf M.

    2018-02-01

    Non-contact, spatially resolved oxygenation measurements remain an open challenge in the biomedical field and in non-contact patient monitoring. Although point measurements are the clinical standard to this day, resolving regional differences in oxygenation will improve the quality and safety of care. Recent developments in spectral imaging have resulted in spectral filter array (SFA) cameras. These provide the means to acquire spatial spectral videos in real time and allow a spatial approach to spectroscopy. In this study, the performance of a 25-channel near-infrared SFA camera was evaluated by obtaining spatial oxygenation maps of the hands during an occlusion of the left upper arm in 7 healthy volunteers. For comparison, a clinical oxygenation monitoring system, INVOS, was used as a reference. For the NIR SFA camera, oxygenation curves were derived from 2-3 wavelength bands with custom-made fast analysis software using a basic algorithm. Dynamic oxygenation changes determined with the NIR SFA camera and the INVOS system at different regional locations of the occluded versus non-occluded hands showed good agreement. To increase the signal-to-noise ratio, the algorithm and image acquisition were optimised. The measurements were robust under different illumination conditions with NIR light sources. This study shows that imaging of relative oxygenation changes over larger body areas is potentially possible in real time.

  20. New Galaxy-hunting Sky Camera Sees Redder Better | Berkeley Lab

    Science.gov Websites

    The Mosaic-3 camera is now one of the best cameras on the planet for studying outer space at red wavelengths that are too red for the human eye. Mosaic-3's primary mission is to carry out a survey of roughly one-eighth of the sky (5,500 square degrees). This survey is just one layer in the galaxy survey that is locating targets for DESI.

  1. Two Titans

    NASA Image and Video Library

    2017-08-11

    These two views of Saturn's moon Titan exemplify how NASA's Cassini spacecraft has revealed the surface of this fascinating world. Cassini carried several instruments to pierce the veil of hydrocarbon haze that enshrouds Titan. The mission's imaging cameras also have several spectral filters sensitive to specific wavelengths of infrared light that are able to make it through the haze to the surface and back into space. These "spectral windows" have enabled the imaging cameras to map nearly the entire surface of Titan. In addition to Titan's surface, images from both the imaging cameras and VIMS have provided windows into the moon's ever-changing atmosphere, chronicling the appearance and movement of hazes and clouds over the years. A large, bright and feathery band of summer clouds can be seen arcing across high northern latitudes in the view at right. These views were obtained with the Cassini spacecraft narrow-angle camera on March 21, 2017. Images taken using red, green and blue spectral filters were combined to create the natural-color view at left. The false-color view at right was made by substituting an infrared image (centered at 938 nanometers) for the red color channel. The views were acquired at a distance of approximately 613,000 miles (986,000 kilometers) from Titan. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21624

  2. Two-dimensional photon-counting detector arrays based on microchannel array plates

    NASA Technical Reports Server (NTRS)

    Timothy, J. G.; Bybee, R. L.

    1975-01-01

    The production of simple and rugged photon-counting detector arrays has been made possible by recent improvements in the performance of the microchannel array plate (MCP) and by the parallel development of compatible electronic readout systems. The construction of proximity-focused MCP arrays of novel design, in which photometric information from (n x m) picture elements is read out with a total of (n + m) amplifier and discriminator circuits, is described. Results obtained with a breadboard (32 x 32)-element array employing 64 charge-sensitive amplifiers are presented, and the application of systems of this type in spectrometers and cameras for use with ground-based telescopes and on orbiting spacecraft is discussed.

  3. Concept of dual-resolution light field imaging using an organic photoelectric conversion film for high-resolution light field photography.

    PubMed

    Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki

    2017-11-01

    Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose a concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral content of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF, which has the green spectral sensitivity, onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint with the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectral information (red and blue) at multiple viewpoints (sub-aperture images) but at low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.

  4. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. One-chip image systems, in which the image sensor has a full digital interface, bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes images more real and colorful. A color filter passes only light in a specific wavelength band, with a transmittance determined by the filter itself. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto matched pixels in the image sensing array. According to the signal caught by each pixel, the image of the environment can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes color filters increasingly important. Although developing the color filter process is challenging, it is well worth the effort. We provide the best service with shorter cycle time, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered are also described in this paper.

  5. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or... Albania, Australia, Austria, Belgium, Bulgaria, Canada, Croatia, Cyprus, Czech Republic, Denmark, Estonia....b.4.b that have a focal plane array with 111,000 or fewer elements and a frame rate of 60 Hz or less...

  6. Beam characterization by wavefront sensor

    DOEpatents

    Neal, Daniel R.; Alford, W. J.; Gruetzner, James K.

    1999-01-01

    An apparatus and method for characterizing an energy beam (such as a laser) with a two-dimensional wavefront sensor, such as a Shack-Hartmann lenslet array. The sensor measures the wavefront slope and irradiance of the beam at a single point on the beam and calculates a space-beamwidth product. A detector array such as a charge-coupled device (CCD) camera is preferably employed.
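
    A Shack-Hartmann sensor relates each lenslet's focal-spot displacement on the detector to the local wavefront tilt. A minimal sketch under the small-angle approximation follows; the function and parameter names are illustrative, not the patent's terminology.

```python
import numpy as np

def wavefront_slopes(spot_xy, ref_xy, focal_len, pixel_pitch):
    """Local wavefront slopes from a Shack-Hartmann sensor.

    Each lenslet focuses a spot onto the detector; the spot's displacement
    from its reference (plane-wave) position, divided by the lenslet focal
    length, gives the local wavefront slope (small-angle approximation).

    spot_xy, ref_xy : (n_lenslets, 2) centroid positions in pixels
    focal_len       : lenslet focal length in meters
    pixel_pitch     : detector pixel pitch in meters
    """
    displacement = (np.asarray(spot_xy, float) - np.asarray(ref_xy, float)) * pixel_pitch
    return displacement / focal_len   # slope components in radians
```

    A spot displaced 10 pixels on a 5 um pitch detector behind a 5 mm lenslet corresponds to a 10 mrad local tilt, which illustrates the sensor's leverage between geometry and slope sensitivity.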

  7. Clouds over Tharsis

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Color composite of condensate clouds over Tharsis made from red and blue images with a synthesized green channel. Mars Orbiter Camera wide angle frames from Orbit 48.

    Figure caption from Science Magazine

  8. Experimental and numerical study of plastic shear instability under high-speed loading conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokovikov, Mikhail, E-mail: sokovikov@icmm.ru, E-mail: naimark@icmm.ru; Chudinov, Vasiliy, E-mail: sokovikov@icmm.ru, E-mail: naimark@icmm.ru; Bilalov, Dmitry, E-mail: sokovikov@icmm.ru, E-mail: naimark@icmm.ru

    2014-11-14

    The behavior of specimens dynamically loaded during split Hopkinson (Kolsky) bar tests in a regime close to simple shear conditions was studied. The lateral surface of the specimens was investigated in real time with the aid of a high-speed infrared camera, a CEDIP Silver 450M. The temperature field distributions obtained at different times made it possible to trace the evolution of plastic strain localization. The process of target perforation involving plug formation and ejection was examined using the high-speed infrared camera and a VISAR velocity measurement system. The microstructure of the tested specimens was analyzed using an optical interferometer-profilometer and a scanning electron microscope. The development of plastic shear instability regions has been simulated numerically.

  9. [Nitrogen status diagnosis of rice by using a digital camera].

    PubMed

    Jia, Liang-Liang; Fan, Ming-Sheng; Zhang, Fu-Suo; Chen, Xin-Ping; Lü, Shi-Hua; Sun, Yan-Ming

    2009-08-01

    In the present research, a field experiment with different N application rates was conducted to study the possibility of using visible-band color analysis methods to monitor the N status of a rice canopy. The correlations between the visible-band color intensities of rice canopy images acquired from a digital camera and conventional nitrogen status diagnosis parameters (leaf SPAD chlorophyll meter readings, total N content, upland biomass, and N uptake) were studied. The results showed that the red color intensity (R), green color intensity (G), and normalized redness intensity (NRI) have significant inverse linear correlations with the conventional N diagnosis parameters of SPAD readings, total N content, upland biomass, and total N uptake. The correlation coefficient values (r) were from -0.561 to -0.714 for the red band (R), from -0.452 to -0.505 for the green band (G), and from -0.541 to -0.817 for the normalized redness intensity (NRI). In contrast, the normalized greenness intensity (NGI) showed a significant positive correlation with the conventional N parameters, with correlation coefficient values (r) from 0.505 to 0.559. Compared with SPAD readings, the normalized redness intensity (NRI), with a high r value of 0.541-0.780 with conventional N parameters, could better express the N status of rice. The digital image color analysis method shows potential for use in rice N status diagnosis in the future.
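
    The normalized redness intensity can be computed from a canopy image as a chromatic coordinate. This sketch assumes the common definition NRI = R/(R+G+B); the paper's exact normalization is not given here.

```python
import numpy as np

def normalized_redness(image_rgb):
    """Mean normalized redness intensity over an image.

    Assumes the chromatic-coordinate definition NRI = R / (R + G + B),
    computed per pixel and averaged (black pixels contribute zero).
    image_rgb : array-like of shape (..., 3) with R, G, B channels last."""
    img = np.asarray(image_rgb, dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    total = r + g + b
    # avoid division by zero where all three channels are zero
    nri = np.divide(r, total, out=np.zeros_like(total), where=total > 0)
    return float(nri.mean())
```

    The normalized greenness intensity NGI = G/(R+G+B) follows the same pattern with the green channel in the numerator.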

  10. Differentiating defects in red oak lumber by discriminant analysis using color, shape, and density

    Treesearch

    B. H. Bond; D. Earl Kline; Philip A. Araman

    2002-01-01

    Defect color, shape, and density measures aid in the differentiation of knots, bark pockets, stain/mineral streak, and clearwood in red oak, (Quercus rubra). Various color, shape, and density measures were extracted for defects present in color and X-ray images captured using a color line scan camera and an X-ray line scan detector. Analysis of variance was used to...

  11. Lights, Camera, Spectroscope! The Basics of Spectroscopy Disclosed Using a Computer Screen

    ERIC Educational Resources Information Center

    Garrido-González, José J.; Trillo-Alcalá, María; Sánchez-Arroyo, Antonio J.

    2018-01-01

    The generation of secondary colors in digital devices by means of the additive red, green, and blue color model (RGB) can be a valuable way to introduce students to the basics of spectroscopy. This work has been focused on the spectral separation of secondary colors of light emitted by a computer screen into red, green, and blue bands, and how the…

  12. Comparison of red-cockaded woodpecker (Picoides borealis) nestling diet in old-growth and old-field longleaf pine (Pinus palustris) habitats

    Treesearch

    James L. Hanula; R. Todd Engstrom

    2000-01-01

    Automatic cameras were used to record adult red-cockaded woodpecker (Picoides borealis) nest visits with food for nestlings. Diet of nestlings on or near an old-growth longleaf pine (Pinus palustris) remnant in southern Georgia was compared to that in longleaf pine stands established on old farm fields in western South Carolina....

  13. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and the specific optics tailored to each camera's requirements.

  14. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  15. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    NASA Astrophysics Data System (ADS)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high-ramp-rate events, are strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost, upward-looking, visible-light CCD sky cameras positioned at 2 km spacing over an area of 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of 200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project.
To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being organized to collect distributed cloud data sets suitable for MODIS-CERES cloud radiation science and solar forecasting algorithm development. A low-cost and robust sensor design suitable for large-scale fabrication and long-term deployment has been developed during the project's prototyping phase.

  16. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras; in particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  17. Star Formation as Seen by the Infrared Array Camera on Spitzer

    NASA Technical Reports Server (NTRS)

    Smith, Howard A.; Allen, L.; Megeath, T.; Barmby, P.; Calvet, N.; Fazio, G.; Hartmann, L.; Myers, P.; Marengo, M.; Gutermuth, R.

    2004-01-01

    The Infrared Array Camera (IRAC) onboard Spitzer has imaged regions of star formation (SF) in its four IR bands with spatial resolutions of approximately 2"/pixel. IRAC is sensitive enough to detect very faint, embedded young stars at levels of tens of microjanskys, and IRAC photometry can categorize their stages of development: from young protostars with infalling envelopes (Class 0/I), to stars whose infrared excesses derive from accreting circumstellar disks (Class II), to evolved stars dominated by photospheric emission. The IRAC images also clearly reveal and help diagnose associated regions of shocked and/or PDR emission in the clouds; we find existing models provide a good start at explaining the continuum of the SF regions IRAC observes.
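The photometric classification sketched above is commonly done with the infrared spectral index α = d log(λF_λ)/d log λ measured across the IRAC bands: rising SEDs indicate infalling envelopes, falling SEDs indicate disks or bare photospheres. A minimal illustration, with the class boundaries taken from the conventional approximate scheme rather than from this paper:

```python
import numpy as np

def spectral_index(wavelengths_um, flux_jy):
    """Slope of log10(lambda * F_lambda) versus log10(lambda).

    With F_nu in Jy, lambda * F_lambda is proportional to F_nu / lambda.
    """
    lam = np.asarray(wavelengths_um, dtype=float)
    lam_flam = np.asarray(flux_jy, dtype=float) / lam
    return np.polyfit(np.log10(lam), np.log10(lam_flam), 1)[0]

def classify(alpha):
    """Approximate YSO class boundaries; exact cuts vary by author."""
    if alpha >= 0.3:
        return "Class 0/I (infalling envelope)"
    if alpha >= -0.3:
        return "flat spectrum"
    if alpha >= -1.6:
        return "Class II (disk excess)"
    return "Class III (photospheric)"
```

For example, a source whose flux rises steeply from 3.6 to 8.0 µm yields α > 0.3 and lands in Class 0/I.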

  18. Etalon Array Reconstructive Spectrometry

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2017-01-01

    Compact spectrometers are crucial in areas where size and weight may need to be minimized. These types of spectrometers often contain no moving parts, which makes for an instrument that can be highly durable. With the recent proliferation in low-cost and high-resolution cameras, camera-based spectrometry methods have the potential to make portable spectrometers small, ubiquitous, and cheap. Here, we demonstrate a novel method for compact spectrometry that uses an array of etalons to perform spectral encoding, and uses a reconstruction algorithm to recover the incident spectrum. This spectrometer has the unique capability for both high resolution and a large working bandwidth without sacrificing sensitivity, and we anticipate that its simplicity makes it an excellent candidate whenever a compact, robust, and flexible spectrometry solution is needed.
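The encoding-plus-reconstruction scheme described above is linear: each camera pixel behind an etalon records y_i = Σ_j T_ij s_j, where row i of T is that etalon's transmission versus wavelength, and the incident spectrum s is recovered by inverting the system. A numpy sketch using a random stand-in for the calibrated transmission matrix (the paper's actual etalon responses and reconstruction algorithm are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_etalons = 64, 96   # spectral bins; etalon-covered camera pixels (illustrative)

# Stand-in for the calibrated transmission matrix: row i is etalon i's
# transmission versus wavelength. A real instrument would measure this.
T = rng.uniform(0.0, 1.0, size=(n_etalons, n_channels))

s_true = np.zeros(n_channels)
s_true[[10, 40]] = [1.0, 0.6]          # toy input spectrum: two emission lines
y = T @ s_true                          # intensities recorded behind the etalons

# Tikhonov-regularized least squares recovers the spectrum from the encoding
lam = 1e-4
s_hat = np.linalg.solve(T.T @ T + lam * np.eye(n_channels), T.T @ y)
```

Because there are more etalon measurements than spectral bins here, the regularized inverse recovers the two toy lines; under-determined variants would need sparsity or other priors.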

  19. A higher-speed compressive sensing camera through multi-diode design

    NASA Astrophysics Data System (ADS)

    Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore

    2013-05-01

    Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
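The sequential single-pixel measurement chain that each sub-aperture implements can be sketched in a few lines: each DMD pattern yields one scalar photodiode reading, and a sparse scene is recovered from fewer measurements than pixels. The ±1 patterns and the greedy orthogonal-matching-pursuit solver below are generic illustrations, not InView's proprietary patterns or reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 40, 4      # pixels in one sub-aperture, measurements, assumed sparsity

A = rng.choice([-1.0, 1.0], size=(m, n))   # one pseudorandom DMD pattern per row
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x                                   # sequential photodiode readings

# Orthogonal matching pursuit: greedily build the support, refit by least squares
support, residual = [], y.copy()
for _ in range(2 * k):                      # a few extra iterations for robustness
    best = int(np.argmax(np.abs(A.T @ residual)))
    if best not in support:
        support.append(best)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef                       # reconstructed sub-aperture image
```

With 40 measurements of a 64-pixel, 4-sparse scene, the greedy solver typically recovers the scene exactly in the noiseless case.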

  20. PMMW Camera TRP. Phase 1

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Passive millimeter wave (PMMW) sensors have the ability to see through fog, clouds, dust and sandstorms and thus have the potential to support all-weather operations, both military and commercial. Many of the applications, such as military transport or commercial aircraft landing, are technologically stressing in that they require imaging of a scene with a large field of view in real time and with high spatial resolution. The development of a low cost PMMW focal plane array camera is essential to obtain real-time video images to fulfill the above needs. The overall objective of this multi-year project (Phase 1) was to develop and demonstrate the capabilities of a W-band PMMW camera with a microwave/millimeter wave monolithic integrated circuit (MMIC) focal plane array (FPA) that can be manufactured at low cost for both military and commercial applications. This overall objective was met in July 1997 when the first video images from the camera were generated of an outdoor scene. In addition, our consortium partner McDonnell Douglas was to develop a real-time passive millimeter wave flight simulator to permit pilot evaluation of a PMMW-equipped aircraft in a landing scenario. A working version of this simulator was completed. This work was carried out under the DARPA-funded PMMW Camera Technology Reinvestment Project (TRP), also known as the PMMW Camera DARPA Joint Dual-Use Project. In this final report for the Phase 1 activities, a year by year description of what the specific objectives were, the approaches taken, and the progress made is presented, followed by a description of the validation and imaging test results obtained in 1997.

  1. 3D imaging and wavefront sensing with a plenoptic objective

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several super-resolution algorithms have been proposed to counter the resolution loss associated with lightfield acquisition through a microlens array, and a number of multiview stereo algorithms have been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations related to the aforementioned aspects, along with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive index changes associated with turbulence; correcting these changes requires high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, as many authors have advocated.

  2. Uncooled Terahertz real-time imaging 2D arrays developed at LETI: present status and perspectives

    NASA Astrophysics Data System (ADS)

    Simoens, François; Meilhan, Jérôme; Dussopt, Laurent; Nicolas, Jean-Alain; Monnier, Nicolas; Sicard, Gilles; Siligaris, Alexandre; Hiberty, Bruno

    2017-05-01

    As in other imaging sensor markets, whatever the technology, the commercial spread of terahertz (THz) cameras requires simultaneously meeting the criteria of high sensitivity and low cost and SWaP (size, weight and power). Monolithic silicon-based 2D sensors integrated in uncooled THz real-time cameras are good candidates to meet these requirements. Over the past decade, LETI has been studying and developing such arrays with two complementary technological approaches, i.e. antenna-coupled silicon bolometers and CMOS field-effect transistors (FETs), both compatible with standard silicon microelectronics processes. LETI has leveraged its know-how in thermal infrared bolometer sensors to develop a proprietary architecture for THz sensing. High technological maturity has been achieved, as illustrated by the demonstration of fast scanning of a large field of view and the recent launch of a commercial camera. In the FET-based THz field, recent work has focused on innovative CMOS read-out integrated circuit designs. The studied architectures take advantage of the large pixel pitch to enhance flexibility and sensitivity: an embedded, in-pixel, configurable signal processing chain dramatically reduces the noise. Video sequences at 100 frames per second have been achieved using our 31x31-pixel 2D focal plane arrays (FPAs). The authors describe the present status of these developments, and perspectives on performance evolution are discussed. Several experimental imaging tests are also presented to illustrate the capability of these arrays to address industrial applications such as non-destructive testing (NDT), security, and quality control of food.

  3. Dual-mode photosensitive arrays based on the integration of liquid crystal microlenses and CMOS sensors for obtaining the intensity images and wavefronts of objects.

    PubMed

    Tong, Qing; Lei, Yu; Xin, Zhaowei; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng

    2016-02-08

    In this paper, we present a dual-mode photosensitive array (DMPA) constructed by hybrid integration of an electrically driven liquid crystal microlens array (LCMLA) and a CMOS sensor array, which can be used to measure both conventional intensity images and the corresponding wavefronts of objects. We utilize liquid crystal materials to shape a microlens array with an electrically tunable focal length. By switching the voltage signal on and off, the wavefronts and the intensity images can be acquired sequentially through the DMPA. We use white light to obtain the object's wavefronts to avoid losing important wavefront information. We separate the white-light wavefronts with a large number of spectral components and then experimentally compare them with single-spectral wavefronts of typical red, green and blue lasers, respectively. We then mix the red, green and blue wavefronts into a composite wavefront containing more optical information about the object.

  4. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.

  5. On the Temporal Evolution of Red Sprites, Runaway Theory Versus Data

    NASA Technical Reports Server (NTRS)

    Yukhimuk, V.; Roussel-Dupre, R. A.; Symbalisty, E. M. D.

    1999-01-01

    The results of numerical simulations of red sprite discharges, namely the temporal evolutions of optical emissions, are presented and compared with observations. The simulations are done using the recently recalculated runaway avalanche rates. The temporal evolution of these simulations is in good agreement with ground-based photometer and CCD TV camera observations of red sprites. Our model naturally explains the "hairline" of red sprites as a boundary between the region where the intensity of optical emissions associated with runaway breakdown has a maximum and the region where the intensity of optical emissions caused by conventional breakdown and ambient electron heating has a maximum. We also present for the first time simulations of red sprites with a daytime conductivity profile.

  6. Combinatorial fabrication and screening of organic light-emitting device arrays

    NASA Astrophysics Data System (ADS)

    Shinar, Joseph; Shinar, Ruth; Zhou, Zhaoqun

    2007-11-01

    The combinatorial fabrication and screening of 2-dimensional (2-d) small molecular UV-violet organic light-emitting device (OLED) arrays, 1-d blue-to-red arrays, 1-d intense white OLED libraries, 1-d arrays to study Förster energy transfer in guest-host OLEDs, and 2-d arrays to study exciplex emission from OLEDs is described. The results demonstrate the power of combinatorial approaches for screening OLED materials and configurations, and for studying their basic properties.

  7. STS-109 Flight Day 3 Highlights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This footage from the third day of the STS-109 mission to service the Hubble Space Telescope (HST) begins with the grappling of the HST by the robotic arm of the Columbia Orbiter, operated by Mission Specialist Nancy Currie. During the grappling, numerous angles deliver close-up images of the telescope, which appears to be in good shape despite many years in orbit around the Earth. Following the positioning of the HST on its berthing platform in the Shuttle bay, the robotic arm is used to perform an external survey of the telescope. Some cursory details are given about the equipment to be installed on the HST, including a replacement cooling system for the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Advanced Camera for Surveys. Following the survey, there is footage of the successful retraction of both of the telescope's flexible solar arrays. These arrays will be replaced by rigid solar arrays with decreased surface area and increased performance.

  8. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near-IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  9. Clouds over Tharsis

    NASA Image and Video Library

    1998-03-13

    Color composite of condensate clouds over Tharsis made from red and blue images with a synthesized green channel. Mars Orbiter Camera wide angle frames from Orbit 48. http://photojournal.jpl.nasa.gov/catalog/PIA00812

  10. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new concept of photon-counting camera for fast, low-light-level imaging applications is introduced. The possible spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
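A Gray-code mask is attractive for position encoding because adjacent positions differ in exactly one code bit, so a photo-event spot straddling two mask strips is mislocated by at most one position. A minimal sketch of the binary-reflected Gray code itself (the mask layout and diffraction optics are beyond this snippet):

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cumulative XOR of right-shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Decoding a photo-event address then amounts to reading one bit per detector channel and applying `gray_decode` to the assembled code word.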

  11. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient; it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the full-scale 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  12. Beam characterization by wavefront sensor

    DOEpatents

    Neal, D.R.; Alford, W.J.; Gruetzner, J.K.

    1999-08-10

    An apparatus and method are disclosed for characterizing an energy beam (such as a laser) with a two-dimensional wavefront sensor, such as a Shack-Hartmann lenslet array. The sensor measures wavefront slope and irradiance of the beam at a single point on the beam and calculates a space-beamwidth product. A detector array such as a charge coupled device camera is preferably employed. 21 figs.
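The lenslet-array measurement in such a Shack-Hartmann sensor reduces to a simple geometric relation: each lenslet focuses its patch of the beam to a spot on the detector, and the local wavefront slope is the spot's displacement from its reference position divided by the lenslet focal length. A generic sketch (function names and parameters are illustrative, not taken from the patent):

```python
import numpy as np

def spot_centroid(subimage):
    """Intensity-weighted centroid of one lenslet's spot, in pixels (x, y)."""
    img = np.asarray(subimage, dtype=float)
    ys, xs = np.indices(img.shape)          # row index = y, column index = x
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def wavefront_slopes(cx, cy, ref_x, ref_y, pixel_pitch, focal_length):
    """Local wavefront slopes (radians) from spot displacement.

    pixel_pitch and focal_length in the same units (e.g. meters).
    """
    return ((cx - ref_x) * pixel_pitch / focal_length,
            (cy - ref_y) * pixel_pitch / focal_length)
```

For example, a spot displaced by one 10 µm pixel behind a 5 mm focal-length lenslet corresponds to a local tilt of 2 mrad.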

  13. High Information Capacity Quantum Imaging

    DTIC Science & Technology

    2014-09-19

    single-pixel camera [41, 75]. An object is imaged onto a Digital Micromirror Device (DMD), a 2D binary array of individually-addressable mirrors that ... reflect light either to a single detector or a dump. Rows of the sensing matrix A consist of random, binary patterns placed sequentially on the DMD ... The single-pixel camera concept naturally adapts to imaging correlations by adding a second detector. Consider placing separate DMDs in the near-field

  14. Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras

    PubMed Central

    Wolf, Alejandro; Pezoa, Jorge E.; Figueroa, Miguel

    2016-01-01

    Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by spatial non-uniformity (NU) noise. The NU noise superimposes a fixed pattern on the true images, and the intensity of the pattern changes with time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, corresponding to a set of NU parameters that determine the spatial structure of the noise, and a dynamic term, which scales linearly with fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to characterize offline both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show excellent NU correction performance, with a root mean square error of less than 0.25 °C when the array's temperature varies by approximately 15 °C. PMID:27447637
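The paper's noise model can be written per pixel as y = x + u + v·ΔT, with u the constant spatial pattern and v the pixel's temperature sensitivity. A toy pixelwise LMS sketch under the simplifying assumption that the true scene is known and flat during the updates (the paper's estimators operate on real scenes and include a Hammerstein-Wiener variant not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (32, 32)
u = rng.normal(0.0, 1.0, shape)     # constant NU pattern (assumed calibrated offline)
v = rng.normal(0.0, 0.1, shape)     # per-pixel temperature sensitivity (to estimate online)

v_hat = np.zeros(shape)
mu = 0.05                            # LMS step size
for _ in range(500):
    dT = rng.uniform(-5.0, 5.0)      # FPA temperature deviation for this frame
    scene = 20.0                     # simplifying assumption: flat, known scene
    frame = scene + u + v * dT       # raw frame with temperature-dependent NU noise
    err = frame - scene - u - v_hat * dT   # residual after the current correction
    v_hat += mu * err * dT           # pixelwise LMS update

corrected = frame - u - v_hat * dT   # NU-compensated final frame
```

Each update contracts the per-pixel error by a factor (1 − μ·ΔT²), so v_hat converges to v as long as μ·ΔT² stays below 2.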

  15. Volume phase holographic gratings for the Subaru Prime Focus Spectrograph: performance measurements of the prototype grating set

    NASA Astrophysics Data System (ADS)

    Barkhouser, Robert H.; Arns, James; Gunn, James E.

    2014-08-01

    The Prime Focus Spectrograph (PFS) is a major instrument under development for the 8.2 m Subaru telescope on Mauna Kea. Four identical, fixed spectrograph modules are located in a room above one Nasmyth focus. A 55 m fiber optic cable feeds light into the spectrographs from a robotic fiber positioner mounted at the telescope prime focus, behind the wide field corrector developed for Hyper Suprime-Cam. The positioner contains 2400 fibers and covers a 1.3 degree hexagonal field of view. Each spectrograph module will be capable of simultaneously acquiring 600 spectra. The spectrograph optical design consists of a Schmidt collimator, two dichroic beamsplitters to separate the light into three channels, and for each channel a volume phase holographic (VPH) grating and a dual-corrector, modified Schmidt reimaging camera. This design provides a 275 mm collimated beam diameter, wide simultaneous wavelength coverage from 380 nm to 1.26 µm, and good imaging performance at the fast f/1.1 focal ratio required from the cameras to avoid oversampling the fibers. The three channels are designated as the blue, red, and near-infrared (NIR), and cover the bandpasses 380-650 nm (blue), 630-970 nm (red), and 0.94-1.26 µm (NIR). A mosaic of two Hamamatsu 2k×4k, 15 µm pixel CCDs records the spectra in the blue and red channels, while the NIR channel employs a 4k×4k, substrate-removed HAWAII-4RG array from Teledyne, with 15 µm pixels and a 1.7 µm wavelength cutoff. VPH gratings have become the dispersing element of choice for moderate-resolution astronomical spectrographs due to their potential for very high diffraction efficiency, low scattered light, and the more compact instrument designs offered by transmissive dispersers. High quality VPH gratings are now routinely being produced in the sizes required for instruments on large telescopes. These factors made VPH gratings an obvious choice for PFS. 
In order to reduce risk to the project, as well as fully exploit the performance potential of this technology, a set of three prototype VPH gratings (one each of the blue, red, and NIR designs) was ordered and has been recently delivered. The goal for these prototype units, but not a requirement, was to meet the specifications for the final gratings in order to serve as spares and also as early demonstration and integration articles. In this paper we present the design and specifications for the PFS gratings, the plan and setups used for testing both the prototype and final gratings, and results from recent optical testing of the prototype grating set.

  16. High-angular-resolution NIR astronomy with large arrays (SHARP I and SHARP II)

    NASA Astrophysics Data System (ADS)

    Hofmann, Reiner; Brandl, Bernhard; Eckart, Andreas; Eisenhauer, Frank; Tacconi-Garman, Lowell E.

    1995-06-01

    SHARP I and SHARP II are near-infrared cameras for high-angular-resolution imaging. Both cameras are built around a 256 × 256 pixel NICMOS 3 HgCdTe array from Rockwell which is sensitive in the 1-2.5 µm range. With a 0.05"/pixel scale, they can produce diffraction-limited K-band images at 4-m-class telescopes. For a 256 × 256 array, this pixel scale results in a field of view of 12.8" × 12.8", which is well suited for the observation of galactic and extragalactic near-infrared sources. Photometric and low-resolution spectroscopic capabilities are added by photometric band filters (J, H, K), narrow-band filters (λ/Δλ ≈ 100) for selected spectral lines, and a CVF (λ/Δλ ≈ 70). A cold shutter permits short exposure times down to about 10 ms. The data acquisition electronics permanently accepts the maximum frame rate of 8 Hz, which is defined by the detector time constants (data rate 1 Mbyte/s). SHARP I was especially designed for speckle observations at ESO's 3.5 m New Technology Telescope and has been in operation since 1991. SHARP II has been used at ESO's 3.6 m telescope together with the adaptive optics system COME-ON+ since 1993. A new version of SHARP II is presently under test, which incorporates exchangeable camera optics for observations at scales of 0.035, 0.05, and 0.1"/pixel. The first scale extends diffraction-limited observations down to the J band, while the last provides a larger field of view. To demonstrate the power of the cameras, images of the galactic center obtained with SHARP I, and images of the R136 region in 30 Doradus observed with SHARP II, are presented.

  17. Extended spectrum SWIR camera with user-accessible Dewar

    NASA Astrophysics Data System (ADS)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  18. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

    In uncooled long-wave IR camera systems, the temperature of the focal plane array (FPA) varies with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly governed by the FPA temperature, changes noticeably as well, degrading image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are consulted to update the current offset parameters via temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms (the minimizing-the-sum-of-squared-errors algorithm, MSSE, and the template-based solution algorithm, TBS) using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method quickly tracks the response drift of the detector units when the FPA temperature changes; it matches MSSE in correction quality while approaching TBS in processing time, making it well suited to real-time operation with a strong correction effect.
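
    The two-stage scheme summarized above (fixed per-pixel gains from a two-point calibration, plus offset tables indexed by FPA temperature and blended by interpolation) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the array names and table layout are assumptions.

```python
import numpy as np

def correct_frame(raw, gain, table_temps, offset_tables, t_fpa):
    """Shutterless NUC sketch: gain from a two-point calibration, offset
    interpolated from tables stored at known FPA temperatures.

    raw           -- raw frame (H x W)
    gain          -- per-pixel gain coefficients (H x W)
    table_temps   -- sorted 1-D array of FPA temperatures with stored tables
    offset_tables -- per-pixel offsets, one table per temperature (T x H x W)
    t_fpa         -- current FPA temperature reading
    """
    # Find the two stored temperatures bracketing t_fpa and blend linearly.
    i = np.clip(np.searchsorted(table_temps, t_fpa), 1, len(table_temps) - 1)
    t0, t1 = table_temps[i - 1], table_temps[i]
    w = (t_fpa - t0) / (t1 - t0)
    offset = (1.0 - w) * offset_tables[i - 1] + w * offset_tables[i]
    # Real-time correction: apply gain, subtract the temperature-dependent offset.
    return gain * raw - offset
```

    Updating only the offset term per frame keeps the per-pixel work to one multiply and one subtract, which is what makes the method as fast as a template-based solution.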

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vican, Laura; Zuckerman, B.; Schneider, Adam

    We present results from two Herschel observing programs using the Photodetector Array Camera and Spectrometer. During three separate campaigns, we obtained Herschel data for 24 stars at 70, 100, and 160 μm. We chose stars that were already known or suspected to have circumstellar dust based on excess infrared (IR) emission previously measured with the InfraRed Astronomical Satellite (IRAS) or Spitzer and used Herschel to examine long-wavelength properties of the dust. Fifteen stars were found to be uncontaminated by background sources and possess IR emission most likely due to a circumstellar debris disk. We analyzed the properties of these debris disks to better understand the physical mechanisms responsible for dust production and removal. Seven targets were spatially resolved in the Herschel images. Based on fits to their spectral energy distributions, nine disks appear to have two temperature components. Of these nine, in three cases, the warmer dust component is likely the result of a transient process rather than a steady-state collisional cascade. The dust belts at four stars are likely stirred by an unseen planet and merit further investigation.

  20. Graphical user interface for a dual-module EMCCD x-ray detector array

    NASA Astrophysics Data System (ADS)

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen

    2011-03-01

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000x to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2kx1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.

  1. Global Composite

    Atmospheric Science Data Center

    2013-04-19

    article title:  MISR Global Images See the Light of Day ... camera and combines data from the red, green and blue spectral bands to create a natural color image. The central view combines ...

  2. ASPIRE - Airborne Spectro-Polarization InfraRed Experiment

    NASA Astrophysics Data System (ADS)

    DeLuca, E.; Cheimets, P.; Golub, L.; Madsen, C. A.; Marquez, V.; Bryans, P.; Judge, P. G.; Lussier, L.; McIntosh, S. W.; Tomczyk, S.

    2017-12-01

    Direct measurements of coronal magnetic fields are critical for taking the next step in active region and solar wind modeling and for building the next generation of physics-based space-weather models. We are proposing a new airborne instrument to make these key observations. Building on the successful Airborne InfraRed Spectrograph (AIR-Spec) experiment for the 2017 eclipse, we will design and build a spectro-polarimeter to measure coronal magnetic field during the 2019 South Pacific eclipse. The new instrument will use the AIR-Spec optical bench and the proven pointing, tracking, and stabilization optics. A new cryogenic spectro-polarimeter will be built focusing on the strongest emission lines observed during the eclipse. The AIR-Spec IR camera, slit jaw camera and data acquisition system will all be reused. The poster will outline the optical design and the science goals for ASPIRE.

  3. 640 x 480 MWIR and LWIR camera system developments

    NASA Astrophysics Data System (ADS)

    Tower, John R.; Villani, Thomas S.; Esposito, Benjamin J.; Gilmartin, Harvey R.; Levine, Peter A.; Coyle, Peter J.; Davis, Timothy J.; Shallcross, Frank V.; Sauer, Donald J.; Meyerhofer, Dietrich

    1993-01-01

    The performance of a 640 x 480 PtSi, 3-5 micron (MWIR), Stirling-cooled camera system with a minimum resolvable temperature of 0.03 K is considered. A preliminary specification of a full-TV-resolution PtSi radiometer was developed using the measured performance characteristics of the Stirling-cooled camera. The radiometer is capable of imaging rapid thermal transients from 25 to 250 °C with better than 1 percent temperature resolution. This performance is achieved using the electronic exposure control capability of the MOS focal plane array (FPA). A liquid-nitrogen-cooled camera with an eight-position filter wheel has been developed using the 640 x 480 PtSi FPA. Low-thermal-mass packaging for the FPA was developed for Joule-Thomson applications.

  4. 640 x 480 MWIR and LWIR camera system developments

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Villani, T. S.; Esposito, B. J.; Gilmartin, H. R.; Levine, P. A.; Coyle, P. J.; Davis, T. J.; Shallcross, F. V.; Sauer, D. J.; Meyerhofer, D.

    The performance of a 640 x 480 PtSi, 3-5 micron (MWIR), Stirling-cooled camera system with a minimum resolvable temperature of 0.03 K is considered. A preliminary specification of a full-TV-resolution PtSi radiometer was developed using the measured performance characteristics of the Stirling-cooled camera. The radiometer is capable of imaging rapid thermal transients from 25 to 250 °C with better than 1 percent temperature resolution. This performance is achieved using the electronic exposure control capability of the MOS focal plane array (FPA). A liquid-nitrogen-cooled camera with an eight-position filter wheel has been developed using the 640 x 480 PtSi FPA. Low-thermal-mass packaging for the FPA was developed for Joule-Thomson applications.

  5. Quantitative phase imaging of human red blood cells using phase-shifting white light interference microscopy with colour fringe analysis

    NASA Astrophysics Data System (ADS)

    Singh Mehta, Dalip; Srivastava, Vishal

    2012-11-01

    We report quantitative phase imaging of human red blood cells (RBCs) using phase-shifting interference microscopy. Five phase-shifted white-light interferograms are recorded using a colour charge-coupled device (CCD) camera. The white-light interferograms were decomposed into red, green, and blue colour components. The phase-shifted interferograms of each colour were then processed by phase-shifting analysis, and phase maps for the red, green, and blue colours were reconstructed. Wavelength-dependent refractive index profiles of the RBCs were computed from the single set of white-light interferograms. The present technique has great potential for non-invasive determination of refractive index variation and morphological features of cells and tissues.

  6. The Red Radio Ring: a gravitationally lensed hyperluminous infrared radio galaxy at z = 2.553 discovered through the citizen science project SPACE WARPS

    NASA Astrophysics Data System (ADS)

    Geach, J. E.; More, A.; Verma, A.; Marshall, P. J.; Jackson, N.; Belles, P.-E.; Beswick, R.; Baeten, E.; Chavez, M.; Cornen, C.; Cox, B. E.; Erben, T.; Erickson, N. J.; Garrington, S.; Harrison, P. A.; Harrington, K.; Hughes, D. H.; Ivison, R. J.; Jordan, C.; Lin, Y.-T.; Leauthaud, A.; Lintott, C.; Lynn, S.; Kapadia, A.; Kneib, J.-P.; Macmillan, C.; Makler, M.; Miller, G.; Montaña, A.; Mujica, R.; Muxlow, T.; Narayanan, G.; O'Briain, D.; O'Brien, T.; Oguri, M.; Paget, E.; Parrish, M.; Ross, N. P.; Rozo, E.; Rusu, Cristian E.; Rykoff, E. S.; Sanchez-Argüelles, D.; Simpson, R.; Snyder, C.; Schloerb, F. P.; Tecza, M.; Wang, W.-H.; Van Waerbeke, L.; Wilcox, J.; Viero, M.; Wilson, G. W.; Yun, M. S.; Zeballos, M.

    2015-09-01

    We report the discovery of a gravitationally lensed hyperluminous infrared galaxy (intrinsic LIR ≈ 10^13 L⊙) with strong radio emission (intrinsic L1.4 GHz ≈ 10^25 W Hz⁻¹) at z = 2.553. The source was identified in the citizen science project SPACE WARPS through the visual inspection of tens of thousands of iJKs colour composite images of luminous red galaxies (LRGs), groups and clusters of galaxies and quasars. Appearing as a partial Einstein ring (re ≈ 3 arcsec) around an LRG at z = 0.2, the galaxy is extremely bright in the sub-millimetre for a cosmological source, with the thermal dust emission approaching 1 Jy at peak. The redshift of the lensed galaxy is determined through the detection of the CO(3→2) molecular emission line with the Large Millimetre Telescope's Redshift Search Receiver and through [O III] and Hα line detections in the near-infrared from Subaru/Infrared Camera and Spectrograph. We have resolved the radio emission with high-resolution (300-400 mas) eMERLIN L-band and Very Large Array C-band imaging. These observations are used in combination with the near-infrared imaging to construct a lens model, which indicates a lensing magnification of μ ≈ 10. The source reconstruction appears to support a radio morphology comprised of a compact (<250 pc) core and a more extended component, perhaps indicative of an active nucleus and jet or lobe.

  7. Spitzer Digs Up Galactic Fossil

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    This false-color image taken by NASA's Spitzer Space Telescope shows a globular cluster previously hidden in the dusty plane of our Milky Way galaxy. Globular clusters are compact bundles of old stars that date back to the birth of our galaxy, 13 or so billion years ago. Astronomers use these galactic 'fossils' as tools for studying the age and formation of the Milky Way.

    Most clusters orbit around the center of the galaxy well above its dust-enshrouded disc, or plane, while making brief, repeated passes through the plane that each last about a million years. Spitzer, with infrared eyes that can see into the dusty galactic plane, first spotted the newfound cluster during its current pass. A visible-light image (inset of Figure 1) shows only a dark patch of sky.

    The red streak behind the core of the cluster is a dust cloud, which may indicate the cluster's interaction with the Milky Way. Alternatively, this cloud may lie coincidentally along Spitzer's line of sight.

    Follow-up observations with the University of Wyoming Infrared Observatory helped set the distance of the new cluster at about 9,000 light-years from Earth - closer than most clusters - and set the mass at the equivalent of 300,000 Suns. The cluster's apparent size, as viewed from Earth, is comparable to a grain of rice held at arm's length. It is located in the constellation Aquila.

    Astronomers believe that this cluster may be one of the last in our galaxy to be uncovered.

    This image composite was taken on April 21, 2004, by Spitzer's infrared array camera. It is composed of images obtained at four wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red).

    Galactic Fossil Found Behind Curtain of Dust In Figure 2, the image mosaic shows the same patch of sky in various wavelengths of light. While the visible-light image (left) shows a dark sky speckled with stars, the infrared images (middle and right) reveal a never-before-seen bundle of stars, called a globular cluster. The left panel is from the California Institute of Technology's Digitized Sky Survey; the middle panel includes images from the NASA-funded Two Micron All-Sky Survey and the University of Wyoming Infrared Observatory (circle inset); and the right panel is from NASA's Spitzer Space Telescope.

    The Two Micron All-Sky Survey false-color image was obtained using near-infrared wavelengths ranging from 1.3 to 2.2 microns. The University of Wyoming Observatory false-color image was captured on July 31, 2004, at wavelengths ranging from 1.2 to 2.2 microns. The Spitzer false-color image composite was taken on April 21, 2004, by its infrared array camera. It is composed of images obtained at four mid-infrared wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red).

  8. The current status of red spruce in the eastern United States: distribution, population trends, and environmental drivers

    Treesearch

    Gregory Nowacki; Robert Carr; Michael. Van Dyck

    2010-01-01

    Red spruce (Picea rubens Sarg.) was affected by an array of direct (logging, fire, and grazing) and indirect human activities (acid deposition) over the past centuries. To adequately assess past impacts on red spruce, thus helping frame its restoration potential, requires a clear understanding of its current status. To achieve this, Forest and...

  9. Seasonal and Diel Activity Patterns of Eight Sympatric Mammals in Northern Japan Revealed by an Intensive Camera-Trap Survey.

    PubMed

    Ikeda, Takashi; Uchida, Kenta; Matsuura, Yukiko; Takahashi, Hiroshi; Yoshida, Tsuyoshi; Kaji, Koichi; Koizumi, Itsuro

    2016-01-01

    The activity patterns of mammals are generally categorized as nocturnal, diurnal, crepuscular (active at twilight), and cathemeral (active throughout the day). These patterns are highly variable across regions and seasons even within the same species. However, quantitative data is still lacking, particularly for sympatric species. We monitored the seasonal and diel activity patterns of terrestrial mammals in Hokkaido, Japan. Through an intensive camera-trap survey a total of 13,279 capture events were recorded from eight mammals over 20,344 camera-trap days, i.e., two years. Diel activity patterns were clearly divided into four categories: diurnal (Eurasian red squirrels), nocturnal (raccoon dogs and raccoons), crepuscular (sika deer and mountain hares), and cathemeral (Japanese martens, red foxes, and brown bears). Some crepuscular and cathemeral mammals shifted activity peaks across seasons. Particularly, sika deer changed peaks from twilight during spring-autumn to day-time in winter, possibly because of thermal constraints. Japanese martens were cathemeral during winter-summer, but nocturnal in autumn. We found no clear indication of predator-prey and competitive interactions, suggesting that animal densities are not very high or temporal niche partitioning is absent among the target species. This long-term camera-trap survey was highly cost-effective and provided one of the most detailed seasonal and diel activity patterns in multiple sympatric mammals under natural conditions.

  10. Visualization of Subsurface Defects in Composites using a Focal Plane Array Infrared Camera

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1999-01-01

    A technique for enhanced defect visualization in composites via transient thermography is presented in this paper. The effort targets automated defect-map construction for multiple defects located in the observed area. Experimental data were collected on composite panels of different thicknesses with square inclusions and flat-bottom holes of different depths and orientations. The time evolution of the thermal response and spatial thermal profiles are analyzed. The pattern generated by carbon fibers and the vignetting effect of the focal plane array camera make defect visualization difficult. Defect visibility is improved by the pulse-phase technique and spatial background treatment. The relationship between the size of a defect and its reconstructed image is analyzed as well. The image processing technique for noise reduction is discussed.

  11. The NIKA2 Large Field-of-View Millimeter Continuum Camera for the 30-M IRAM Telescope

    NASA Astrophysics Data System (ADS)

    Monfardini, Alessandro

    2018-01-01

    We have constructed and deployed a multi-thousand-pixel dual-band (150 and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera to image an instantaneous field of view of 6.5 arcmin, configurable to map the linear polarization at 260 GHz. We provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focusing in particular on the cryogenics, the optics, the focal plane arrays based on Kinetic Inductance Detectors (KID), and the readout electronics. We present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institut de Radioastronomie Millimétrique) telescope at Pico Veleta, together with preliminary science-grade results.

  12. Wide range instantaneous temperature measurements of convective fluid flows by using a schlieren system based in color images

    NASA Astrophysics Data System (ADS)

    Martínez-González, A.; Moreno-Hernández, D.; Monzón-Hernández, D.; León-Rodríguez, M.

    2017-06-01

    In the schlieren method, the deflection of light by the presence of an inhomogeneous medium is proportional to the gradient of its refractive index. Such deflection, in a schlieren system, is represented by light intensity variations on the observation plane. Then, for a digital camera, the intensity level registered by each pixel depends mainly on the variation of the medium refractive index and the status of the digital camera settings. Therefore, in this study, we regulate the intensity value of each pixel by controlling the camera settings such as exposure time, gamma and gain values in order to calibrate the image obtained to the actual temperature values of a particular medium. In our approach, we use a color digital camera. The images obtained with a color digital camera can be separated on three different color-channels. Each channel corresponds to red, green, and blue color, moreover, each one has its own sensitivity. The differences in sensitivity allow us to obtain a range of temperature values for each color channel. Thus, high, medium and low sensitivity correspond to green, blue, and red color channel respectively. Therefore, by adding up the temperature contribution of each color channel we obtain a wide range of temperature values. Hence, the basic idea in our approach to measure temperature, using a schlieren system, is to relate the intensity level of each pixel in a schlieren image to the corresponding knife-edge position measured at the exit focal plane of the system. Our approach was applied to the measurement of instantaneous temperature fields of the air convection caused by a heated rectangular metal plate and a candle flame. We found that for the metal plate temperature measurements only the green and blue color-channels were required to sense the entire phenomena. On the other hand, for the candle case, the three color-channels were needed to obtain a complete measurement of temperature. 
    In our study, the candle temperature was taken as the reference, and the maximum temperature values obtained for the green, blue, and red color-channels were ∼275.6, ∼412.9, and ∼501.3 °C, respectively.
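
    Because each colour channel saturates at a different temperature, a wide-range map can be assembled by preferring the most sensitive (green) channel and falling back to blue and then red where a channel's usable range is exceeded. This is a schematic sketch only: the thresholds below reuse the per-channel maxima quoted above, and the merging rule itself is an assumption, not the authors' procedure.

```python
import numpy as np

def merge_temperature_maps(t_green, t_blue, t_red,
                           green_max=275.6, blue_max=412.9):
    """Combine per-channel temperature maps (deg C) into one wide-range map,
    using each channel only below its assumed upper limit."""
    return np.where(t_green < green_max, t_green,
                    np.where(t_blue < blue_max, t_blue, t_red))
```
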

  13. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
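
    The piecewise-linear disparity map described above assigns one disparity per triangle vertex; inside a triangle the disparity follows from barycentric interpolation, which is also what rasterizing the mesh on a GPU computes implicitly. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def disparity_at(p, verts, d_verts):
    """Piecewise-linear disparity at point p inside one triangle.

    p       -- (x, y) query point
    verts   -- three (x, y) vertex coordinates of the triangle
    d_verts -- disparity values assigned to the three vertices
    """
    a, b, c = verts
    # Barycentric coordinates of p with respect to triangle abc.
    denom = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / denom
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / denom
    w2 = 1.0 - w0 - w1
    # Linear blend of the vertex disparities.
    return w0 * d_verts[0] + w1 * d_verts[1] + w2 * d_verts[2]
```
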

  14. Increasing Electrochemiluminescence Intensity of a Wireless Electrode Array Chip by Thousands of Times Using a Diode for Sensitive Visual Detection by a Digital Camera.

    PubMed

    Qi, Liming; Xia, Yong; Qi, Wenjing; Gao, Wenyue; Wu, Fengxia; Xu, Guobao

    2016-01-19

    We report, for the first time, both a wireless electrochemiluminescence (ECL) electrode microarray chip and a dramatic increase in ECL obtained by embedding a diode in an electromagnetic receiver coil. The newly designed device consists of a chip and a transmitter. The chip carries an electromagnetic receiver coil, a mini-diode, and a gold electrode array. The mini-diode rectifies the alternating current into direct current and thus enhances ECL intensities 18,000-fold, enabling sensitive visual detection using common cameras or smartphones as low-cost detectors. The detection limit of hydrogen peroxide using a digital camera is comparable to that using photomultiplier tube (PMT)-based detectors. Coupled with a PMT-based detector, the device can detect luminol with higher sensitivity, with a linear range from 10 nM to 1 mM. Because of its advantages, including high sensitivity, high throughput, low cost, high portability, and simplicity, it is promising for point-of-care testing, drug screening, and high-throughput analysis.

  15. A zonal wavefront sensor with multiple detector planes

    NASA Astrophysics Data System (ADS)

    Pathak, Biswajit; Boruah, Bosanta R.

    2018-03-01

    A conventional zonal wavefront sensor estimates the wavefront from data captured in a single detector plane using a single camera. In this paper, we introduce a zonal wavefront sensor which comprises multiple detector planes instead of a single detector plane. The proposed sensor is based on an array of custom-designed plane diffraction gratings followed by a single focusing lens. The laser beam whose wavefront is to be estimated is incident on the grating array, and one of the diffracted orders from each grating is focused on the detector plane. The setup, by employing a beam-splitter arrangement, facilitates focusing of the diffracted beams on multiple detector planes where multiple cameras can be placed. The use of multiple cameras in the sensor offers several advantages in wavefront estimation. For instance, the proposed sensor can provide superior inherent centroid detection accuracy that cannot be achieved by the conventional system. It can also provide enhanced dynamic range and reduced crosstalk. We present results from a proof-of-principle experimental arrangement that demonstrate the advantages of the proposed wavefront sensing scheme.
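
    The centroid detection the sensor relies on, and whose accuracy the multiple detector planes are claimed to improve, is in essence an intensity-weighted mean over each focal spot. A minimal sketch of that step (not the authors' code):

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (x, y) of a focal-spot image."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```

    The local wavefront slope over each grating's subaperture then follows from the shift of this centroid relative to a reference position.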

  16. A deep proper motion catalog within the Sloan digital sky survey footprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munn, Jeffrey A.; Harris, Hugh C.; Tilleman, Trudy M.

    2014-12-01

    A new proper motion catalog is presented, combining the Sloan Digital Sky Survey (SDSS) with second-epoch observations in the r band within a portion of the SDSS imaging footprint. The new observations were obtained with the 90prime camera on the Steward Observatory Bok 90 inch telescope, and the Array Camera on the U.S. Naval Observatory, Flagstaff Station, 1.3 m telescope. The catalog covers 1098 square degrees to r = 22.0, an additional 1521 square degrees to r = 20.9, plus a further 488 square degrees of lesser quality data. Statistical errors in the proper motions range from 5 mas yr⁻¹ at the bright end to 15 mas yr⁻¹ at the faint end, for a typical epoch difference of six years. Systematic errors are estimated to be roughly 1 mas yr⁻¹ for the Array Camera data, and as much as 2-4 mas yr⁻¹ for the 90prime data (though typically less). The catalog also includes a second epoch of r-band photometry.
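
    For reference, a proper motion in the catalog's units follows directly from the positional difference between the two epochs divided by the epoch baseline, with the usual cos(Dec) factor on the RA component. A sketch under those assumptions (names are illustrative):

```python
import math

def proper_motion(ra1_deg, dec1_deg, ra2_deg, dec2_deg, baseline_yr):
    """(mu_ra * cos(dec), mu_dec) in mas/yr from positions at two epochs."""
    MAS_PER_DEG = 3.6e6  # 3600 arcsec/deg * 1000 mas/arcsec
    dec_mid = math.radians(0.5 * (dec1_deg + dec2_deg))
    mu_ra = (ra2_deg - ra1_deg) * math.cos(dec_mid) * MAS_PER_DEG / baseline_yr
    mu_dec = (dec2_deg - dec1_deg) * MAS_PER_DEG / baseline_yr
    return mu_ra, mu_dec
```

    With the catalog's typical six-year baseline, a positional shift of 36 mas corresponds to 6 mas yr⁻¹, comfortably above the quoted statistical errors at the bright end.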

  17. Mitsubishi thermal imager using the 512 x 512 PtSi focal plane arrays

    NASA Astrophysics Data System (ADS)

    Fujino, Shotaro; Miyoshi, Tetsuo; Yokoh, Masataka; Kitahara, Teruyoshi

    1990-01-01

    The MITSUBISHI THERMAL IMAGER model IR-5120A is a high-resolution, high-sensitivity infrared television imaging system. It was exhibited at SPIE's 1988 Technical Symposium on Optics, Electro-Optics, and Sensors, held in April 1988 in Orlando, where its high performance attracted the interest of many attendees. The detector is a Platinum Silicide Charge Sweep Device (CSD) array containing more than 260,000 individual pixels, manufactured by Mitsubishi Electric Co. The IR-5120A consists of a Camera Head, containing the CSD, a Stirling-cycle cooler and support electronics, and a Camera Control Unit containing the pixel fixed-pattern-noise corrector, video controller, cooler driver and support power supplies. The Stirling-cycle cooler built into the Camera Head keeps the CSD at a temperature of approximately 80 K, and features light weight, a long life of more than 2000 hours and low acoustical noise. This paper describes an improved Thermal Imager, lighter, more compact and of higher performance, along with its design philosophy, characteristics and field imagery.

  18. Still from Red Spot Movie

    NASA Image and Video Library

    2000-11-21

    This image is one of seven from the narrow-angle camera on NASA's Cassini spacecraft assembled as a brief movie of cloud movements on Jupiter. The smallest features visible are about 500 kilometers (300 miles) across.

  19. Queensland

    Atmospheric Science Data Center

    2013-04-16

    ... of a 157 kilometer x 210 kilometer area. The natural-color image is composed of data from the camera's red, green, and blue bands. In the ... MISR Team. Text acknowledgment: Clare Averill, David J. Diner, Graham Bothwell (Jet Propulsion Laboratory).

  20. Report 11HL: Technologies for Trusted Maritime Situational Awareness

    DTIC Science & Technology

    2011-10-01

    Olympics. The AIS antenna can be seen on the wooden pole to the right. The ASIA camera is contained within the Pelco enclosure (i.e., white case) on ... tracks based on GPS and radar. The physical deployment of ASIA, radar and the acoustic array are also shown ... the 2010 Vancouver Olympics.

  1. Resonant-enhanced full-color emission of quantum-dot-based micro LED display technology.

    PubMed

    Han, Hau-Vei; Lin, Huang-Yu; Lin, Chien-Chung; Chong, Wing-Cheung; Li, Jie-Ru; Chen, Kuo-Ju; Yu, Peichen; Chen, Teng-Ming; Chen, Huang-Ming; Lau, Kei-May; Kuo, Hao-Chung

    2015-12-14

    Colloidal quantum dots which can emit red, green, and blue colors are incorporated with a micro-LED array to demonstrate a feasible choice for future display technology. The pitch of the micro-LED array is 40 μm, which is sufficient for high-resolution screen applications. The quantum dots are sprayed into such tight spaces using Aerosol Jet technology, which uses an atomizer and gas-flow control to obtain uniform, well-controlled narrow spots. Ultraviolet LEDs in the array excite the red, green and blue quantum dots on the top surface. To increase the utilization of the UV photons, a distributed Bragg reflector layer was laid down on the device to reflect most of the leaked UV photons back into the quantum dot layers. With this mechanism, the luminous flux is enhanced by 194% (blue), 173% (green) and 183% (red) relative to samples without the reflector. The luminous efficacy of radiation (LER) was measured under various currents, and a value of 165 lm/W was recorded.

  2. The Spectral Energy Distributions of z ~ 8 Galaxies from the IRAC Ultra Deep Fields: Emission Lines, Stellar Masses, and Specific Star Formation Rates at 650 Myr

    NASA Astrophysics Data System (ADS)

    Labbé, I.; Oesch, P. A.; Bouwens, R. J.; Illingworth, G. D.; Magee, D.; González, V.; Carollo, C. M.; Franx, M.; Trenti, M.; van Dokkum, P. G.; Stiavelli, M.

    2013-11-01

    Using new ultradeep Spitzer/InfraRed Array Camera (IRAC) photometry from the IRAC Ultra Deep Field program, we investigate the stellar populations of a sample of 63 Y-dropout galaxy candidates at z ~ 8, only 650 Myr after the big bang. The sources are selected from HST/ACS+WFC3/IR data over the Hubble Ultra Deep Field (HUDF), two HUDF parallel fields, and wide-area data over CANDELS/GOODS-South. The new Spitzer/IRAC data increase the coverage in [3.6] and [4.5] to ~120 h over the HUDF, reaching depths of ~28 (AB, 1σ). The improved depth and inclusion of brighter candidates result in direct ≥3σ IRAC detections of 20/63 sources, of which 11/63 are detected at ≥5σ. The average [3.6]-[4.5] colors of IRAC-detected galaxies at z ~ 8 are markedly redder than those at z ~ 7, observed only 130 Myr later. The simplest explanation is that we witness strong rest-frame optical emission lines (in particular [O III] λλ4959, 5007 + Hβ) moving through the IRAC bandpasses with redshift. Assuming that the average rest-frame spectrum is the same at both z ~ 7 and z ~ 8, we estimate a rest-frame equivalent width of W([O III] λλ4959,5007 + Hβ) = 670^{+260}_{-170} Å, contributing 0.56^{+0.16}_{-0.11} mag to the [4.5] filter at z ~ 8. The corresponding W(Hα) = 430^{+160}_{-110} Å implies an average specific star formation rate of sSFR = 11^{+11}_{-5} Gyr⁻¹ and a stellar population age of 100^{+100}_{-50} Myr. Correcting the spectral energy distribution for the contribution of emission lines lowers the average best-fit stellar masses and mass-to-light ratios by ~3×, decreasing the integrated stellar mass density to ρ*(z = 8, M_UV < −18) = 0.6^{+0.4}_{-0.3} × 10^6 M⊙ Mpc⁻³. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
These observations are associated with programs #11563, 9797. Based on observations with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. Support for this work was provided by NASA through contract 125790 issued by JPL/Caltech. Based on service mode observations collected at the European Southern Observatory, Paranal, Chile (ESO Program 073.A-0764A). Based on data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
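The quoted ~0.56 mag boost to the [4.5] filter follows from the redshifted equivalent width of the line relative to the filter width. A minimal sketch of that relation, assuming a flat continuum across the band and an effective [4.5] bandpass width of ~10,100 Å (both assumptions, not figures stated in the abstract):

```python
# Sketch: magnitude boost a strong emission line adds to a broadband filter.
# Assumptions: flat continuum across the band; effective [4.5] width ~10,100 A.
import math

def line_boost_mag(ew_rest, z, filter_width):
    """Boost (mag) from a line of rest-frame EW `ew_rest` (Angstrom)
    redshifted into a filter of effective width `filter_width` (Angstrom)."""
    ew_obs = ew_rest * (1.0 + z)          # equivalent width stretches as (1+z)
    return 2.5 * math.log10(1.0 + ew_obs / filter_width)

dm = line_boost_mag(670.0, 8.0, 10100.0)
print(f"{dm:.2f} mag")   # roughly the ~0.5-0.6 mag quoted for [4.5] at z ~ 8
```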

  3. Adaptive DOF for plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Oberdörster, Alexander; Lensch, Hendrik P. A.

    2013-03-01

    Plenoptic cameras promise arbitrary refocusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique for recording light fields with an adaptive depth of focus. Between multiple exposures (or multiple recordings of the light field), the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus are chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.
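Merging multiple exposures taken at different MLA-sensor spacings can be illustrated with a generic focus-stacking step: per pixel, keep the exposure with the highest local sharpness. This is a simplified sketch of that idea, not the authors' actual algorithm:

```python
# Focus-stack sketch: per pixel, pick the exposure with the highest local
# sharpness (squared 4-neighbour Laplacian as a simple focus measure).
import numpy as np

def local_sharpness(img):
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def merge_stack(stack):
    sharp = np.stack([local_sharpness(im) for im in stack])
    best = np.argmax(sharp, axis=0)               # index of sharpest exposure
    return np.take_along_axis(np.stack(stack), best[None], axis=0)[0]

# toy 2-exposure stack
a = np.zeros((8, 8)); a[2, 2] = 1.0               # sharp detail in exposure 0
b = np.full((8, 8), 0.5)                          # flat (defocused) exposure 1
merged = merge_stack([a, b])
print(merged[2, 2])   # the detail from the sharper exposure survives
```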

  4. Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Nepomuk Otte, Adam

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive, fast (nanosecond) photon detectors that convert the photon signal into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost effective. We are investigating several directions that include innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched capacitor arrays for the digitization.

  5. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  6. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  7. Design and evaluation of a filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Jobson, D. J.; Rowland, C. W.

    1974-01-01

    The facsimile camera is an optical-mechanical scanning device which was selected as the imaging system for the Viking '75 lander missions to Mars. A concept which uses an interference filter-photosensor array to integrate a spectrometric capability with the basic imagery function of this camera was proposed for possible application to future missions. This paper is concerned with the design and evaluation of critical electronic circuits and components that are required to implement this concept. The feasibility of obtaining spectroradiometric data is demonstrated, and the performance of a laboratory model is described in terms of spectral range, angular and spectral resolution, and noise-equivalent radiance.

  8. The Advanced Gamma-ray Imaging System (AGIS) - Camera Electronics Development

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu; Bechtol, K.; Buehler, R.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Hanna, D.; Horan, D.; Humensky, B.; Karlsson, N.; Kieda, D.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Mukherjee, R.; Ong, R.; Otte, N.; Quinn, J.; Schroedter, M.; Swordy, S.; Wagner, R.; Wakely, S.; Weinstein, A.; Williams, D.; Camera Working Group; AGIS Collaboration

    2010-03-01

    AGIS, a next-generation imaging atmospheric Cherenkov telescope (IACT) array, aims to achieve a sensitivity level of about one milliCrab for gamma-ray observations in the energy band of 50 GeV to 100 TeV. Achieving this level of performance will require on the order of 50 telescopes with perhaps as many as 1M total electronics channels. The larger scale of AGIS requires a very different approach from the currently operating IACTs, with lower-cost and lower-power electronics incorporated into camera modules designed for high reliability and easy maintenance. Here we present the concept and development status of the AGIS camera electronics.

  9. Gallium arsenide quantum well-based far infrared array radiometric imager

    NASA Technical Reports Server (NTRS)

    Forrest, Kathrine A.; Jhabvala, Murzy D.

    1991-01-01

    We have built an array-based camera (FIRARI) for thermal imaging (lambda = 8 to 12 microns). FIRARI uses a square-format 128 by 128 element array of aluminum gallium arsenide quantum well detectors that are indium bump bonded to a high-capacity silicon multiplexer. The quantum well detectors offer good responsivity along with high response and noise uniformity, resulting in excellent thermal images without compensation for variation in pixel response. A noise-equivalent temperature difference of 0.02 K at a scene temperature of 290 K was achieved with the array operating at 60 K. FIRARI demonstrated that AlGaAs quantum well detector technology can provide large-format arrays with performance superior to mercury cadmium telluride at far less cost.

  10. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  11. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  12. Astronauts Thornton & Akers on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-012 (5 Dec 1993) --- This view of astronauts Kathryn C. Thornton (top) and Thomas D. Akers working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is teaming with Akers to install the +V2 Solar Array Panel as a replacement for the original one removed earlier. Akers uses tethers and a foot restraint to remain in position for the task. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  13. Astronauts Thornton & Akers on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-014 (5 Dec 1993) --- This view of astronauts Kathryn C. Thornton (bottom) and Thomas D. Akers working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is teaming with Akers to install the +V2 Solar Array Panel as a replacement for the original one removed earlier. Akers uses tethers and a foot restraint to remain in position for the task. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  14. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  15. Analysis of crystalline lens coloration using a black and white charge-coupled device camera.

    PubMed

    Sakamoto, Y; Sasaki, K; Kojima, M

    1994-01-01

    To analyze lens coloration in vivo, we used a new type of Scheimpflug camera based on a black-and-white charge-coupled device (CCD). A new methodology was proposed. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
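Expressing an RGB measurement as a point on the CIE chromaticity diagram reduces to an RGB-to-XYZ matrix transform followed by normalization. The paper corrected for filter transmittance and CCD sensitivity; the sketch below instead assumes linear sRGB primaries (an assumption, for illustration only):

```python
# Simplified sketch: linear RGB -> CIE XYZ (standard sRGB/D65 matrix),
# then normalize to (x, y) chromaticity coordinates.
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def chromaticity(rgb):
    X, Y, Z = M @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

x, y = chromaticity([1.0, 1.0, 1.0])   # equal RGB = sRGB reference white
print(round(x, 4), round(y, 4))         # lands near the D65 white point
```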

  16. Automated enforcement and highway safety : final report.

    DOT National Transportation Integrated Search

    2013-11-01

    The objectives of the Automated Enforcement and Highway Safety Research study were to conduct a : literature review of national research related to the effectiveness of Red Light Camera (RLC) programs : in changing crash frequency, crash severity, cr...

  17. Yugoslavia

    Atmospheric Science Data Center

    2013-04-17

    ... Image These Multi-angle Imaging SpectroRadiometer (MISR) nadir camera images of Yugoslavia were acquired on July 28, 2000 during ... typically bright as a result of reflection from the plants' cell walls, to the brightness in the red. In the middle "false color" image, ...

  18. Beam line shielding calculations for an Electron Accelerator Mo-99 production facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mocko, Michal

    2016-05-03

    The purpose of this study is to evaluate the photon and neutron fields in and around the latest beam line design for the Mo-99 production facility. The radiation doses to the beam line components (quadrupoles, dipoles, beam stops and the linear accelerator) are calculated in the present report. The beam line design assumes placement of two cameras, infrared (IR) and optical transition radiation (OTR), for continuous monitoring of the beam spot on target during irradiation. The cameras will be placed off the beam axis, offset in the vertical direction. We explored typical shielding arrangements for the cameras and report the resulting neutron and photon dose fields.

  19. Performance and Calibration of H2RG Detectors and SIDECAR ASICs for the RATIR Camera

    NASA Technical Reports Server (NTRS)

    Fox, Ori D.; Kutyrev, Alexander S.; Rapchun, David A.; Klein, Christopher R.; Butler, Nathaniel R.; Bloom, Josh; de Diego, José A.; Farah, Alejandro D.; Gehrels, Neil A.; Georgiev, Leonid; et al.

    2012-01-01

    The Reionization And Transient InfraRed (RATIR) camera has been built for rapid Gamma-Ray Burst (GRB) follow-up and will provide simultaneous optical and infrared photometric capabilities. The infrared portion of this camera incorporates two Teledyne HgCdTe HAWAII-2RG detectors, controlled by Teledyne's SIDECAR ASICs. While other ground-based systems have used the SIDECAR before, this system also utilizes Teledyne's JADE2 interface card and IDE development environment. Together, this setup comprises Teledyne's Development Kit, which is a bundled solution that can be efficiently integrated into future ground-based systems. In this presentation, we characterize the system's read noise, dark current, and conversion gain.

  20. Non-invasive Self-Care Anemia Detection during Pregnancy Using a Smartphone Camera

    NASA Astrophysics Data System (ADS)

    Anggraeni, M. D.; Fatoni, A.

    2017-02-01

    The Indonesian maternal mortality rate is the highest in South East Asia. Postpartum hemorrhage is the major cause of maternal mortality in Indonesia, and anemia during pregnancy contributes significantly to it. Early detection of anemia during pregnancy may save mothers from maternal death. This research aims to develop non-invasive self-care anemia detection based on palpebral color observation using a smartphone camera. The color intensity (red, green, and blue) was measured using the Colorgrab software (Loomatix) and compared to the hemoglobin concentration of the samples, measured using a standard spectrophotometric method. The results showed that the red color intensity had a high correlation (R2 = 0.814) with a linear regression of y = 14.486x + 50.228. This preliminary study may be used for early anemia detection that is more objective than the visual assessment usually performed.
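The reported regression can be inverted to turn a red-channel reading back into a hemoglobin estimate. A minimal sketch, assuming y in the regression is the red intensity and x the hemoglobin concentration in g/dL (the abstract does not label the axes):

```python
# Invert the reported regression y = 14.486x + 50.228 (R^2 = 0.814).
# Assumption: y = red channel intensity, x = hemoglobin (g/dL).
def estimate_hb(red_intensity):
    return (red_intensity - 50.228) / 14.486

red = 14.486 * 11.0 + 50.228        # synthetic reading for Hb = 11 g/dL
print(round(estimate_hb(red), 1))   # -> 11.0
```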

  1. [Nitrogen stress measurement of canola based on multi-spectral charged coupled device imaging sensor].

    PubMed

    Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong

    2006-09-01

    Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. This sensor assesses nitrogen stress by estimating the SPAD value of the canola based on canopy reflectance sensed using the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration methods relating the multi-spectral measurements to the nitrogen levels in crops measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
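A red/near-infrared camera of this kind enables vegetation indices of the sort typically fed into such calibrations. NDVI is shown below as a stand-in; the paper's actual SPAD calibration model is not given in the abstract:

```python
# NDVI sketch from red and near-infrared reflectance channels.
# NDVI = (NIR - R) / (NIR + R); healthy canopy reflects strongly in NIR.
import numpy as np

def ndvi(red, nir):
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

# synthetic canopy pixels (reflectance fractions, illustrative values)
v = ndvi([0.08, 0.10], [0.50, 0.40])
print(v)
```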

  2. An epifluorescent attachment improves whole-plant digital photography of Arabidopsis thaliana expressing red-shifted green fluorescent protein

    PubMed Central

    Baker, Stokes S.; Vidican, Cleo B.; Cameron, David S.; Greib, Haittam G.; Jarocki, Christine C.; Setaputri, Andres W.; Spicuzza, Christopher H.; Burr, Aaron A.; Waqas, Meriam A.; Tolbert, Danzell A.

    2012-01-01

    Background and aims Studies have shown that levels of green fluorescent protein (GFP) leaf surface fluorescence are directly proportional to GFP soluble protein concentration in transgenic plants. However, instruments that measure GFP surface fluorescence are expensive. The goal of this investigation was to develop techniques with consumer digital cameras to analyse GFP surface fluorescence in transgenic plants. Methodology Inexpensive filter cubes containing machine vision dichroic filters and illuminated with blue light-emitting diodes (LED) were designed to attach to digital single-lens reflex (SLR) camera macro lenses. The apparatus was tested on purified enhanced GFP, and on wild-type and GFP-expressing arabidopsis grown autotrophically and heterotrophically. Principal findings Spectrum analysis showed that the apparatus illuminates specimens with wavelengths between ∼450 and ∼500 nm, and detects fluorescence between ∼510 and ∼595 nm. Epifluorescent photographs taken with SLR digital cameras were able to detect red-shifted GFP fluorescence in Arabidopsis thaliana leaves and cotyledons of pot-grown plants, as well as roots, hypocotyls and cotyledons of etiolated and light-grown plants grown heterotrophically. Green fluorescent protein fluorescence was detected primarily in the green channel of the raw image files. Studies with purified GFP produced linear responses to both protein surface density and exposure time (H0: β (slope) = 0 mean counts per pixel (ng s mm−2)−1, r2 > 0.994, n = 31, P < 1.75 × 10−29). Conclusions Epifluorescent digital photographs taken with complementary metal-oxide-semiconductor and charge-coupled device SLR cameras can be used to analyse red-shifted GFP surface fluorescence using visible blue light. 
This detection device can be constructed with inexpensive commercially available materials, thus increasing the accessibility of whole-organism GFP expression analysis to research laboratories and teaching institutions with small budgets. PMID:22479674
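The linear response reported above (fluorescence proportional to protein surface density times exposure time, with intercept fixed at zero) corresponds to a least-squares fit through the origin. A small sketch under that assumption, with synthetic data and helper names of our own:

```python
# Least-squares slope through the origin: b such that y ~ b*x minimizes
# sum((y - b*x)^2), i.e. b = sum(x*y) / sum(x*x).
import numpy as np

def fit_through_origin(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * y) / np.sum(x * x))

x = np.array([1.0, 2.0, 3.0, 4.0])   # density x exposure (ng s mm^-2)
y = 2.5 * x                          # noiseless synthetic counts per pixel
print(fit_through_origin(x, y))      # recovers the slope
```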

  3. Graphical User Interface for a Dual-Module EMCCD X-ray Detector Array.

    PubMed

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2011-03-16

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k×1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
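The side-by-side assembly of two module frames into one 2k×1k image can be sketched as a simple horizontal concatenation (overlap and gain matching between modules are omitted here):

```python
# Stitch two EMCCD module frames side by side into one 2k x 1k image.
import numpy as np

left = np.zeros((1024, 1024), dtype=np.uint16)    # module 1 frame
right = np.ones((1024, 1024), dtype=np.uint16)    # module 2 frame
stitched = np.hstack([left, right])               # modules share the row axis
print(stitched.shape)                              # -> (1024, 2048)
```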

  4. Temperature-Sensitive Coating Sensor Based on Hematite

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J.

    2011-01-01

    A temperature-sensitive coating, based on hematite (iron III oxide), has been developed to measure surface temperature using spectral techniques. The hematite powder is added to a binder that allows the mixture to be painted on the surface of a test specimen. The coating dynamically changes its relative spectral makeup or color with changes in temperature. The color changes from a reddish-brown appearance at room temperature (25 C) to a black-gray appearance at temperatures around 600 C. The color change is reversible and repeatable with temperature cycling from low to high and back to low temperatures. Detection of the spectral changes can be recorded by different sensors, including spectrometers, photodiodes, and cameras. Using a-priori information obtained through calibration experiments in known thermal environments, the color change can then be calibrated to yield accurate quantitative temperature information. Temperature information can be obtained at a point, or over an entire surface, depending on the type of equipment used for data acquisition. Because this innovation uses spectrophotometry principles of operation, rather than the current methods, which use photoluminescence principles, white light can be used for illumination rather than high-intensity short wavelength excitation. The generation of high-intensity white (or potentially filtered long wavelength light) is much easier, and is used more prevalently for photography and video technologies. In outdoor tests, the Sun can be used for short durations as an illumination source as long as the amplitude remains relatively constant. The reflected light is also much higher in intensity than the emitted light from the inefficient current methods. Having a much brighter surface allows a wider array of detection schemes and devices. 
Because color change is the principle of operation, the development of high-quality, lower-cost digital cameras can be used for detection, as opposed to the high-cost imagers needed for intensity measurements with the current methods. Alternative methods of detection are possible to increase the measurement sensitivity. For example, a monochrome camera can be used with an appropriate filter and a radiometric measurement of normalized intensity change that is proportional to the change coating temperature. Using different spectral regions yields different sensitivities and calibration curves for converting intensity change to temperature units. Alternatively, using a color camera, a ratio of the standard red, green, and blue outputs can be used as a self-referenced change. The blue region (less than 500 nm) does not change nearly as much as the red region (greater than 575 nm), so a ratio of color intensities will yield a calibrated temperature image. The new temperature sensor coating is easy to apply, is inexpensive, can contour complex shape surfaces, and can be a global surface measurement system based on spectrophotometry. The color change, or relative intensity change, at different colors makes the optical detection under white light illumination, and associated interpretation, much easier to measure and interpret than in the detection systems of the current methods.
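The red/blue ratio described above can be turned into a temperature map with a per-pixel ratio and a calibration lookup. A minimal sketch, where the calibration table is hypothetical (the actual curve comes from the a-priori calibration experiments the text mentions):

```python
# Self-referenced red/blue ratio mapped to temperature via a calibration
# table. The table values below are assumed, for illustration only.
import numpy as np

cal_ratio = np.array([1.0, 1.5, 2.0, 2.5])     # assumed calibration knots
cal_temp = np.array([25.0, 200.0, 400.0, 600.0])  # deg C at those ratios

def temperature_map(red, blue):
    ratio = np.asarray(red, float) / np.asarray(blue, float)
    return np.interp(ratio, cal_ratio, cal_temp)   # piecewise-linear lookup

print(temperature_map([1.5, 2.0], [1.0, 1.0]))    # -> [200. 400.]
```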

  5. Optical Characterization of the SPT-3G Camera

    NASA Astrophysics Data System (ADS)

    Pan, Z.; Ade, P. A. R.; Ahmed, Z.; Anderson, A. J.; Austermann, J. E.; Avva, J. S.; Thakur, R. Basu; Bender, A. N.; Benson, B. A.; Carlstrom, J. E.; Carter, F. W.; Cecil, T.; Chang, C. L.; Cliche, J. F.; Cukierman, A.; Denison, E. V.; de Haan, T.; Ding, J.; Dobbs, M. A.; Dutcher, D.; Everett, W.; Foster, A.; Gannon, R. N.; Gilbert, A.; Groh, J. C.; Halverson, N. W.; Harke-Hosemann, A. H.; Harrington, N. L.; Henning, J. W.; Hilton, G. C.; Holzapfel, W. L.; Huang, N.; Irwin, K. D.; Jeong, O. B.; Jonas, M.; Khaire, T.; Kofman, A. M.; Korman, M.; Kubik, D.; Kuhlmann, S.; Kuo, C. L.; Lee, A. T.; Lowitz, A. E.; Meyer, S. S.; Michalik, D.; Montgomery, J.; Nadolski, A.; Natoli, T.; Nguyen, H.; Noble, G. I.; Novosad, V.; Padin, S.; Pearson, J.; Posada, C. M.; Rahlin, A.; Ruhl, J. E.; Saunders, L. J.; Sayre, J. T.; Shirley, I.; Shirokoff, E.; Smecher, G.; Sobrin, J. A.; Stark, A. A.; Story, K. T.; Suzuki, A.; Tang, Q. Y.; Thompson, K. L.; Tucker, C.; Vale, L. R.; Vanderlinde, K.; Vieira, J. D.; Wang, G.; Whitehorn, N.; Yefremenko, V.; Yoon, K. W.; Young, M. R.

    2018-05-01

    The third-generation South Pole Telescope camera is designed to measure the cosmic microwave background across three frequency bands (centered at 95, 150 and 220 GHz) with ˜16,000 transition-edge sensor (TES) bolometers. Each multichroic array element on a detector wafer has a broadband sinuous antenna that couples power to six TESs, one for each of the three observing bands and both polarizations, via lumped element filters. Ten detector wafers populate the detector array, which is coupled to the sky via a large-aperture optical system. Here we present the frequency band characterization with Fourier transform spectroscopy, measurements of optical time constants, beam properties, and optical and polarization efficiencies of the detector array. The detectors have frequency bands consistent with our simulations and high average optical efficiency: 86%, 77% and 66% for the 95, 150 and 220 GHz detectors, respectively. The time constants of the detectors are mostly between 0.5 and 5 ms. The beam is round with the correct size, and the polarization efficiency is more than 90% for most of the bolometers.

  6. The single mirror small size telescope (SST-1M) of the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Aguilar, J. A.; Bilnik, W.; Borkowski, J.; Cadoux, F.; Christov, A.; della Volpe, D.; Favre, Y.; Heller, M.; Kasperek, J.; Lyard, E.; Marszałek, A.; Moderski, R.; Montaruli, T.; Porcelli, A.; Prandini, E.; Rajda, P.; Rameez, M.; Schioppa, E., Jr.; Troyano Pujadas, I.; Zietara, K.; Blocki, J.; Bogacz, L.; Bulik, T.; Frankowski, A.; Grudzinska, M.; Idźkowski, B.; Jamrozy, M.; Janiak, M.; Lalik, K.; Mach, E.; Mandat, D.; Michałowski, J.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Paśko, P.; Pech, M.; Schovanek, P.; Seweryn, K.; Skowron, K.; Sliusar, V.; Stawarz, L.; Stodulska, M.; Stodulski, M.; Toscano, S.; Walter, R.; WiÈ©cek, M.; Zagdański, A.

    2016-07-01

    The Small Size Telescope with Single Mirror (SST-1M) is one of the proposed types of Small Size Telescopes (SST) for the Cherenkov Telescope Array (CTA). The CTA south array will be composed of about 100 telescopes, out of which about 70 are of SST class, which are optimized for the detection of gamma rays in the energy range from 5 TeV to 300 TeV. The SST-1M implements a Davies-Cotton optics with a 4 m dish diameter with a field of view of 9°. The Cherenkov light produced in atmospheric showers is focused onto a 88 cm wide hexagonal photo-detection plane, composed of 1296 custom designed large area hexagonal silicon photomultipliers (SiPM) and a fully digital readout and trigger system. The SST-1M camera has been designed to provide high performance in a robust as well as compact and lightweight design. In this contribution, we review the different steps that led to the realization of the telescope prototype and its innovative camera.

  7. Expected progress based on aluminium gallium nitride Focal Plane Arrays for near and deep Ultraviolet

    NASA Astrophysics Data System (ADS)

    Reverchon, J.-L.; Robin, K.; Bansropun, S.; Gourdel, Y.; Robo, J.-A.; Truffer, J.-P.; Costard, E.; Brault, J.; Frayssinet, E.; Duboz, J.-Y.

    The fast development of nitrides has given the opportunity to investigate AlGaN as a material for ultraviolet detection. A camera based on such a material presents an extremely low dark current at room temperature. It can compete with technologies based on photocathodes, MCP intensifiers, back-thinned CCDs or hybrid CMOS focal plane arrays for low-flux measurements. First, we present results on a focal plane array of 320 × 256 pixels with a pitch of 30 μm. The peak responsivity is tuned from 260 nm to 360 nm in different cameras. All these results are obtained in a standard SWIR supply chain and with AlGaN Schottky diodes grown on sapphire. We then present the first attempts to transfer the standard-design Schottky photodiodes from sapphire to silicon substrates. We show the capability to remove the silicon substrate, to etch the window layer in order to extend the bandwidth to shorter wavelengths, and to maintain the AlGaN membrane integrity.

  8. P6 Truss, starboard PV solar array wing deployment

    NASA Image and Video Library

    2000-12-03

    STS097-373-005 (3 December 2000) --- Backdropped against the blackness of space, the deployment of an International Space Station (ISS) solar array was photographed with a 35mm camera by astronaut Carlos I. Noriega, mission specialist. Part of the extravehicular mobility unit (EMU) attached to astronaut Joseph R. Tanner, mission specialist, is visible at bottom center. Tanner and Noriega went on to participate together in three separate space walks.

  9. Efficient Feature Extraction and Likelihood Fusion for Vehicle Tracking in Low Frame Rate Airborne Video

    DTIC Science & Technology

    2010-07-01

    imagery, persistent sensor array I. Introduction New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a...geometric occlusions between target and sensor , motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small...distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with com- putational photography techniques enable the

  10. Design and fabrication of two-dimensional semiconducting bolometer arrays for HAWC and SHARC-II

    NASA Astrophysics Data System (ADS)

    Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. D.; Harper, D. A.; Jhabvala, Murzy D.; Moseley, S. H.; Rennick, Timothy; Shirron, Peter J.; Smith, W. W.; Staguhn, Johannes G.

    2003-02-01

    The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC "Pop-Up" Detectors (PUDs) use a unique folding technique to enable a 12 × 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, while keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 × 32-element array. Engineering results from the first-light run of SHARC II at the Caltech Submillimeter Observatory (CSO) are presented.

  11. Affordable CZT SPECT with dose-time minimization (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hugg, James W.; Harris, Brian W.; Radley, Ian

    2017-03-01

    PURPOSE Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved efficiency of CZT-based molecular imaging systems and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm × 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4% at 140 keV; maximum count rate is <1.5 times higher; non-detection camera edges are reduced 3-fold. Scattered photons are greatly reduced in the photopeak energy window; image contrast is improved; and the optimal FOV is increased to the entire camera area. CONCLUSION Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.

  12. The prototype cameras for trans-Neptunian automatic occultation survey

    NASA Astrophysics Data System (ADS)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a project of three robotic telescopes to detect stellar occultation events generated by TransNeptunian Objects (TNOs). The TAOS II project aims to monitor about 10,000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with 10 mosaicked 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Due to the requirements of high performance and high speed, the development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help with the commissioning of the robotic telescope system. The prototype camera uses the smaller-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to an operating temperature of about 200 K, as the science array will be. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consists of an analog part and a Xilinx FPGA based digital circuit. One FPGA is needed to control and process the signal from each CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  13. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can provide a full understanding of the 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume as a whole, the three-dimensional location information of the focused particles can be reconstructed. The cross-correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time instants quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
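
    The threshold-selection step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the correlation values below are synthetic stand-ins for the cross-correlation between captured and reprojected images, and the function name is an assumption.

```python
# Sketch of adaptive threshold selection via cubic curve fitting: sample the
# reconstruction-quality metric at several candidate thresholds, fit a cubic,
# and take the threshold where the fitted curve peaks.
import numpy as np

def optimal_threshold(thresholds, correlations):
    """Fit a cubic to (threshold, correlation) samples and return the
    threshold at which the fitted curve reaches its maximum."""
    coeffs = np.polyfit(thresholds, correlations, 3)      # cubic fit
    fine = np.linspace(min(thresholds), max(thresholds), 1001)
    fitted = np.polyval(coeffs, fine)
    return float(fine[np.argmax(fitted)])

# Synthetic correlation samples peaking near a threshold of 0.4
ts = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
cs = 1.0 - (ts - 0.4) ** 2      # toy correlation curve, maximum at 0.4
t_opt = optimal_threshold(ts, cs)
```

    In a real SAPIV pipeline the `cs` values would come from correlating each camera image against the image reprojected from the particle field reconstructed at that threshold.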

  14. An autonomous sensor module based on a legacy CCTV camera

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open-source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera maintains the detected person within the camera field of view without requiring an end-user to directly control the camera with a joystick.
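
    The real-world position estimate mentioned above can be sketched with simple flat-ground geometry: given a camera at known height, tilt and pan, a detected pedestrian's foot pixel maps to a ground range and bearing. The parameter names and the small-angle pixel-to-angle model below are illustrative assumptions, not details of the SAPIENT module.

```python
# Flat-ground position estimate from a pan-tilt camera: convert a foot-point
# pixel to an angular offset from the optical axis, intersect the resulting
# ray with the ground plane, and return (x, y) metres relative to the camera.
import math

def ground_position(px, py, img_w, img_h, hfov_deg, vfov_deg,
                    cam_height_m, tilt_deg, pan_deg):
    """tilt_deg is the downward tilt of the optical axis below horizontal;
    pan_deg is the azimuth of the optical axis. Returns None if the pixel
    ray never intersects the ground."""
    az_off = (px - img_w / 2) / img_w * math.radians(hfov_deg)
    el_off = (py - img_h / 2) / img_h * math.radians(vfov_deg)
    depression = math.radians(tilt_deg) + el_off   # ray angle below horizontal
    if depression <= 0:
        return None                                # ray points at or above horizon
    rng = cam_height_m / math.tan(depression)      # flat-ground range
    bearing = math.radians(pan_deg) + az_off
    return rng * math.sin(bearing), rng * math.cos(bearing)

# Camera 5 m up, tilted 30 degrees down, looking north: a foot point at the
# image centre lies due north at 5 / tan(30 deg) = 8.66 m.
pos = ground_position(320, 240, 640, 480, 60, 45, 5.0, 30.0, 0.0)
```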

  15. Stellar Snowflake Cluster

    NASA Image and Video Library

    2005-12-22

    Newborn stars, hidden behind thick dust, are revealed in this image of a section of the Christmas Tree cluster from NASA's Spitzer Space Telescope, created in a joint effort between Spitzer's infrared array camera and its multiband imaging photometer instrument.

  16. Broadband image sensor array based on graphene-CMOS integration

    NASA Astrophysics Data System (ADS)

    Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank

    2017-06-01

    Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.

  17. Neutron camera employing row and column summations

    DOEpatents

    Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore

    2016-06-14

    For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signals from each preamplifier. For each photomultiplier tube, corresponding summation circuitry including R row summation circuits and S column summation circuits numerically adds the magnitudes of the signals from the preamplifiers for each row and for each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms including R entries and P×Q column histograms including S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detection of a neutron).
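
    The row/column summation scheme for a single tube can be sketched in a few lines. This is an illustrative model, not the patent's circuit: the centroid step stands in for whatever the position calculation circuit actually does with the histograms.

```python
# Row and column summation for one photomultiplier tube read out by an
# R x S preamplifier array, followed by a centroid-based position estimate.

def row_column_histograms(magnitudes):
    """magnitudes: R x S nested list of digitized preamp signal magnitudes.
    Returns (row histogram of R entries, column histogram of S entries)."""
    rows = [sum(row) for row in magnitudes]           # R row sums
    cols = [sum(col) for col in zip(*magnitudes)]     # S column sums
    return rows, cols

def centroid(hist):
    """Event coordinate along one axis as the first moment of a histogram."""
    total = sum(hist)
    return sum(i * h for i, h in enumerate(hist)) / total if total else None

# Example: a 3 x 4 preamp array with charge concentrated near row 1, col 2
mags = [[0, 1, 2, 0],
        [1, 4, 9, 2],
        [0, 2, 3, 1]]
rows, cols = row_column_histograms(mags)
x = centroid(cols)   # column coordinate of the event
y = centroid(rows)   # row coordinate of the event
```

    Keeping only the R+S sums per tube, rather than all R×S samples, is what makes the downstream position calculation cheap.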

  18. The NIKA2 Instrument at 30-m IRAM Telescope: Performance and Results

    NASA Astrophysics Data System (ADS)

    Catalano, A.; Adam, R.; Ade, P. A. R.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Comis, B.; De Petris, M.; Désert, F.-X.; Doyle, S.; Driessen, E. F. C.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Macías-Pérez, J. F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Romero, C.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.; Barria, E.; Bres, G.; Camus, P.; Chanthib, P.; Donnier-Valentin, G.; Exshaw, O.; Garde, G.; Gerardin, A.; Leggeri, J.-P.; Levy-Bertrand, F.; Guttin, C.; Hoarau, C.; Grollier, M.; Mocellin, J.-L.; Pont, G.; Rodenas, H.; Tissot, O.; Galvez, G.; John, D.; Ungerechts, H.; Sanchez, S.; Mellado, P.; Munoz, M.; Pierfederici, F.; Penalver, J.; Navarro, S.; Bosson, G.; Bouly, J.-L.; Bouvier, J.; Geraci, C.; Li, C.; Menu, J.; Ponchant, N.; Roni, S.; Roudier, S.; Scordillis, J. P.; Tourres, D.; Vescovi, C.; Barbier, A.; Billon-Pierron, D.; Adane, A.; Andrianasolo, A.; Bracco, A.; Coiffard, G.; Evans, R.; Maury, A.; Rigby, A.

    2018-03-01

    The New IRAM KID Arrays 2 (NIKA2) consortium has just finished installing and commissioning a millimetre camera on the IRAM 30-m telescope. It is a dual-band camera operating with three frequency-multiplexed kilo-pixel arrays of lumped element kinetic inductance detectors (LEKID) cooled to 150 mK, designed to observe the intensity and polarisation of the sky at 260 and 150 GHz (1.15 and 2 mm). NIKA2 is today an IRAM resident instrument for millimetre astronomy, targeting science cases such as the intracluster medium of intermediate to distant clusters (and hence the follow-up of clusters detected by the Planck satellite), high-redshift sources and quasars, the early stages of star formation, and emission from nearby galaxies. We present an overview of the instrument performance as evaluated at the end of the commissioning phase.

  19. Modified plenoptic camera for phase and amplitude wavefront sensing

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Davis, Christopher C.

    2013-09-01

    Shack-Hartmann sensors have been widely applied in wavefront sensing. However, they are limited to measuring slightly distorted wavefronts whose local tilt does not surpass the numerical aperture of the micro-lens array, and cross-talk of incident waves on the micro-lens array must be strictly avoided. In medium-to-strong turbulence conditions in optical communication, where large jitter in the angle of arrival and local interference caused by break-up of the beam are common phenomena, Shack-Hartmann sensors no longer serve as effective tools for revealing distortions in a signal wave. Our design of a modified plenoptic camera shows great potential in observing and extracting useful information from severely disturbed wavefronts. Furthermore, by separating complex interference patterns into several minor interference cases, it may also be capable of resolving regional phase differences of coherently illuminated objects.

  20. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
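
    The sensor-cycling idea above can be sketched as a round-robin scheduler: one camera at a time is powered, a fixed burst of frames is fetched, and control advances to the next camera. The class and method names are illustrative, not from the CT2WS system.

```python
# Round-robin duty cycling over N cameras: only one camera is powered at any
# instant, so each camera (and its processing) is active for 1/N of the time,
# giving the approximately N-fold power reduction described in the text.

class CycledCameraArray:
    def __init__(self, num_cameras, frames_per_burst):
        self.n = num_cameras
        self.burst = frames_per_burst
        self.active = 0                 # index of the currently powered camera

    def next_burst(self):
        """Return (camera index, frame indices) for the next burst, then
        advance so the current camera powers down and the next powers up."""
        cam = self.active
        frames = list(range(self.burst))
        self.active = (self.active + 1) % self.n
        return cam, frames

array = CycledCameraArray(num_cameras=4, frames_per_burst=3)
schedule = [array.next_burst()[0] for _ in range(8)]   # camera visit order
duty_cycle = 1 / array.n     # fraction of time each camera draws power
```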

  1. Prolonged Orientation to Pictorial Novelty in Severely Speech-Disordered Children. Papers and Reports on Child Language Development, No. 4.

    ERIC Educational Resources Information Center

    Mackworth, Norman H.; And Others

    1972-01-01

    The Mackworth wide-angle reflection eye camera was used to record the position of the gaze on a display of 16 white symbols. One of these symbols changed to red after 30 seconds, remained red for a minute of testing, and then became white again. The subjects were 10 aphasic children (aged 5-9), who were compared with a group of 10 normal children,…

  2. Status of the JWST Science Instrument Payload

    NASA Technical Reports Server (NTRS)

    Greenhouse, Matt

    2016-01-01

    The James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) system consists of five sensors (4 science): Mid-Infrared Instrument (MIRI), Near Infrared Imager and Slitless Spectrograph (NIRISS), Fine Guidance Sensor (FGS), Near InfraRed Camera (NIRCam), Near InfraRed Spectrograph (NIRSpec); and nine instrument support systems: Optical metering structure system, Electrical Harness System; Harness Radiator System, ISIM Electronics Compartment, ISIM Remote Services Unit, Cryogenic Thermal Control System, Command and Data Handling System, Flight Software System, Operations Scripts System.

  3. High resolution CsI(Tl)/Si-PIN detector development for breast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.E.; Iwanczyk, J.S.; Tull, C.R.

    High resolution multi-element (8×8) imaging arrays with collimators, size-matched to discrete CsI(Tl) scintillator arrays and Si-PIN photodetector arrays (PDAs), were developed as prototypes for larger arrays for breast imaging. Photodetector pixels were each 1.5 × 1.5 mm² with 0.25 mm gaps. A 16-element quadrant of the detector was evaluated with a segmented CsI(Tl) scintillator array coupled to the silicon array. The scintillator thickness of 6 mm corresponds to >85% total gamma efficiency at 140 keV. Pixel energy resolution of <8% FWHM was obtained for Tc-99m. Electronic noise was 41 e⁻ RMS, corresponding to a 3% FWHM contribution to the 140 keV photopeak. Detection efficiency uniformity measured with a Tc-99m flood source was 4.3% for a ~10% energy photopeak window. Spatial resolution was 1.53 mm FWHM and pitch was 1.75 mm as measured from the Co-57 (122 keV) line spread function. Signal to background was 34 and contrast was 0.94. The energy resolution and spatial characteristics of the new imaging detector exceed those of other scintillator-based imaging detectors. A camera based on this technology will allow: (1) improved Compton scatter rejection; (2) detector positioning in close proximity to the breast to increase signal to noise; (3) improved spatial resolution; and (4) improved efficiency compared to high resolution collimated gamma cameras for the anticipated compressed breast geometries.

  4. Simultaneous operation and control of about 100 telescopes for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Wegner, P.; Colomé, J.; Hoffmann, D.; Houles, J.; Köppel, H.; Lamanna, G.; Le Flour, T.; Lopatin, A.; Lyard, E.; Melkumyan, D.; Oya, I.; Panazol, L.-I.; Punch, M.; Schlenstedt, S.; Schmidt, T.; Stegmann, C.; Schwanke, U.; Walter, R.; Consortium, CTA

    2012-12-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build the next-generation ground-based very high energy (VHE) gamma-ray instrument. Compared to current imaging atmospheric Cherenkov telescope experiments, CTA will extend the energy range and improve the angular resolution while increasing the sensitivity by up to a factor of 10. With about 100 separate telescopes it will be operated as an observatory open to a wide astrophysics and particle physics community, providing a deep insight into the non-thermal high-energy universe. The CTA Array Control system (ACTL) is responsible for several essential control tasks supporting the evaluation and selection of proposals, as well as the preparation, scheduling, and finally the execution of observations with the array. A possible basic distributed software framework for ACTL being considered is the ALMA Common Software (ACS). The ACS framework follows a container-component model and contains a high-level abstraction layer to integrate different types of devices. To achieve a low-level consolidation of connected control hardware, OPC UA (OPen Connectivity-Unified Architecture) client functionality is integrated directly into ACS, allowing interaction with other OPC UA capable hardware. The CTA Data Acquisition System comprises the data readout of all cameras and the transfer of the data to a camera server farm, using standard hardware and software technologies. CTA array control also covers concepts for a possible array trigger system and the corresponding clock distribution. The design of the CTA observation scheduler introduces new algorithmic techniques to achieve the required flexibility.

  5. A Flight Photon Counting Camera for the WFIRST Coronagraph

    NASA Astrophysics Data System (ADS)

    Morrissey, Patrick

    2018-01-01

    A photon counting camera based on the Teledyne-e2v CCD201-20 electron multiplying CCD (EMCCD) is being developed for the NASA WFIRST coronagraph, an exoplanet imaging technology development of the Jet Propulsion Laboratory (Pasadena, CA) that is scheduled to launch in 2026. The coronagraph is designed to directly image planets around nearby stars, and to characterize their spectra. The planets are exceedingly faint, providing signals similar to the detector dark current, and require the use of photon counting detectors. Red sensitivity (600-980nm) is preferred to capture spectral features of interest. Since radiation in space affects the ability of the EMCCD to transfer the required single electron signals, care has been taken to develop appropriate shielding that will protect the cameras during a five year mission. In this poster, consideration of the effects of space radiation on photon counting observations will be described with the mitigating features of the camera design. An overview of the current camera flight system electronics requirements and design will also be described.
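
    The photon counting mode referred to above is commonly implemented by thresholding: with high EM gain, a single photoelectron produces an output well above the read noise, so each pixel value is reduced to a binary photon/no-photon decision. The sketch below assumes a conventional threshold of a few sigma of the read noise; the numbers are illustrative, not WFIRST camera parameters.

```python
# Threshold-based photon counting for an EMCCD frame. Pixel values are
# assumed to be in electrons, already divided by the mean EM register gain,
# so a single photon event sits near 1 e- while read noise is far smaller.

def photon_count_frame(pixels_e, read_noise_e, n_sigma=5.0):
    """Return a binary photon-detection list for one frame."""
    threshold = n_sigma * read_noise_e
    return [1 if p > threshold else 0 for p in pixels_e]

# Read noise 0.1 e- after gain division: 1 e- photon events clear the
# 5-sigma threshold of 0.5 e-, while bare noise excursions do not.
frame = [0.02, 1.1, -0.05, 0.9, 0.3]
counts = photon_count_frame(frame, read_noise_e=0.1)
```

    The cost of this scheme is coincidence loss (two photons in one pixel count as one), which is why photon counting observations are built from many short, faint frames.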

  6. Effect of Display Color on Pilot Performance and Describing Functions

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1997-01-01

    A study has been conducted with the full-spectrum, calligraphic, computer-generated display system to determine the effect of the chromatic content of the visual display upon pilot performance during the landing approach maneuver. This study utilizes a new digital chromatic display system, which has previously been shown to improve the perceived fidelity of out-the-window display scenes, and presents the results of an experiment designed to determine the effects of display color content by the measurement of both vertical approach performance and pilot-describing functions. This method was selected to more fully explore the effects of visual color cues used by the pilot. Two types of landing approaches were made, dynamic and frozen range, with either a landing approach scene or a perspective array display. The landing approach scene was presented with either red runway lights and blue taxiway lights or with the colors reversed, and the perspective array with red lights, blue lights, or red and blue lights combined. The vertical performance measures obtained in this experiment indicated that the pilots performed best with the blue and red/blue displays, and worst with the red displays. The describing-function system analysis showed more variation with the red displays. The crossover frequencies were lowest with the red displays and highest with the combined red/blue displays, which provided the best overall tracking performance. Describing-function performance measures, vertical performance measures, and pilot opinion support the hypothesis that specific colors in displays can influence the pilots' control characteristics during the final approach.

  7. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
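
    The 3D look-up-table approach described above can be sketched briefly: RGB space is quantised into bins, each bin is precomputed as "red peach" or not, and per-pixel classification becomes a single table lookup, which is what makes real-time operation feasible on a Cortex-M4. The bin count and the linear colour rule below are illustrative assumptions standing in for the paper's fitted colour models and histograms.

```python
# Precomputed 3D LUT classification: quantise 8-bit RGB into BINS^3 cells,
# evaluate a colour predicate once per cell, then classify pixels by lookup.

BINS = 32                 # 32 x 32 x 32 table, 8 intensity levels per bin
STEP = 256 // BINS

def build_red_lut(is_red):
    """Precompute a BINS^3 boolean table from a per-colour predicate."""
    return [[[is_red(r * STEP, g * STEP, b * STEP)
              for b in range(BINS)]
             for g in range(BINS)]
            for r in range(BINS)]

def classify(lut, r, g, b):
    """Classify one pixel with a single indexed lookup (no arithmetic
    beyond the bin division)."""
    return lut[r // STEP][g // STEP][b // STEP]

# Illustrative linear rule: red dominates both green and blue by a margin
lut = build_red_lut(lambda r, g, b: r > g + 40 and r > b + 40)

red_pixel = classify(lut, 200, 60, 50)     # strongly red pixel
leaf_pixel = classify(lut, 60, 150, 40)    # green foliage pixel
```

    On the embedded target the same table would live in flash as a packed bit array; the structure of the lookup is unchanged.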

  8. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  9. Laser scatter feature of surface defect on apples

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Ying, Yibin; Cen, YiKe; Huang, Haibo

    2006-10-01

    A machine vision system for real-time fruit quality inspection was developed. The system consists of a chamber, a laser projector, a TMS-7DSP CCD camera (PULNIX Inc.), and a computer. A Meteor-II/MC frame grabber (Matrox Graphics Inc.) was inserted into a slot of the computer to grab fruit images. The laser projector and the camera were mounted at the ceiling of the chamber. An apple was put in the chamber, the spot of the laser projector was projected on the surface of the fruit, and an image was grabbed. Two breeds of apples were tested. Each apple was imaged twice, once for the normal surface and once for the defect. The red component of the images was used to extract features of the defect and the sound surface of the fruits. The average, standard deviation, and comentropy values of the red component of the laser scatter images were analyzed. The standard deviation of the red component is more suitable for separating the defect surface from the sound surface for the Shuijin Fuji apples, but for the Bintang apples more work is needed to separate the different surfaces with laser scatter images.

  10. Safety impacts of red light cameras at signalized intersections based on cellular automata models.

    PubMed

    Chai, C; Wong, Y D; Lum, K M

    2015-01-01

    This study applies a simulation technique to evaluate the hypothesis that red light cameras (RLCs) exert important effects on accident risks. Conflict occurrences are generated by simulation and compared at intersections with and without RLCs to assess the impact of RLCs on several conflict types under various traffic conditions. Conflict occurrences are generated through simulating vehicular interactions based on an improved cellular automata (CA) model. The CA model is calibrated and validated against field observations at approaches with and without RLCs. Simulation experiments are conducted for RLC and non-RLC intersections with different geometric layouts and traffic demands to generate conflict occurrences that are analyzed to evaluate the hypothesis that RLCs exert important effects on road safety. The comparison of simulated conflict occurrences shows favorable safety impacts of RLCs on crossing conflicts and unfavorable impacts for rear-end conflicts during red/amber phases. Corroborative results are found from broad analysis of accident occurrence. RLCs are found to have a mixed effect on accident risk at signalized intersections: crossing collisions are reduced, whereas rear-end collisions may increase. The specially developed CA model is found to be a feasible safety assessment tool.
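
    The conflict generation above rests on a cellular-automaton traffic model. As a minimal stand-in for the paper's improved CA (whose exact rules are not given in the abstract), here is one parallel update step of the classic Nagel-Schreckenberg single-lane model: accelerate, brake to the gap ahead, randomly slow down, then move. Randomisation is disabled in the example so the step is deterministic.

```python
# One Nagel-Schreckenberg update step on a circular single-lane road.
# positions/speeds are parallel lists, one entry per vehicle.
import random

def nasch_step(positions, speeds, road_len, v_max=5, p_slow=0.0, rng=random):
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_pos, new_spd = positions[:], speeds[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)          # 1. accelerate
        v = min(v, gap)                        # 2. brake to avoid collision
        if v > 0 and rng.random() < p_slow:    # 3. random slowdown
            v -= 1
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % road_len   # 4. move
    return new_pos, new_spd

# Two cars on a 20-cell loop; the follower must brake to the 2-cell gap ahead.
pos, spd = nasch_step([0, 3], [2, 0], road_len=20)
```

    A red-light-camera study would extend such a model with signal phases, stop lines, and driver decision rules at amber onset, then count crossing and rear-end conflicts from the resulting trajectories.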

  11. Charon's light curves, as observed by New Horizons' Ralph color camera (MVIC) on approach to the Pluto system

    NASA Astrophysics Data System (ADS)

    Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; Young, L. A.; Stern, S. A.

    2017-05-01

    Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations, obtained between 9 April and 3 July 2015 at phase angles of 14.5° to 15.1°, sub-observer latitudes of 51.2°N to 51.5°N, and a sub-solar latitude of 41.2°N, were analyzed. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than the MVIC Blue and Red channels, respectively.

  12. Charon's Light Curves, as Observed by New Horizons' Ralph Color Camera (MVIC) on Approach to the Pluto System.

    NASA Technical Reports Server (NTRS)

    Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; hide

    2016-01-01

    Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations, obtained between 9 April and 3 July 2015 at phase angles of 14.5 degrees to 15.1 degrees, sub-observer latitudes of 51.2 degrees North to 51.5 degrees North, and a sub-solar latitude of 41.2 degrees North, were analyzed. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than the MVIC Blue and Red channels, respectively.

  13. An investigation into the use of road drainage structures by wildlife in Maryland.

    DOT National Transportation Integrated Search

    2011-08-01

    The research team documented culvert use by 57 species of vertebrates with both infra-red motion-detecting digital game cameras and visual sightings. Species affiliations with culvert characteristics were analyzed using χ² statistics, Canonical ...

  14. White light phase shifting interferometry and color fringe analysis for the detection of contaminants in water

    NASA Astrophysics Data System (ADS)

    Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh

    2016-03-01

    We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common-path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology; the 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green and blue constituent colors, yielding three sets of five phase-shifted interferograms for three different colors from a single set of white light interferograms. This makes the system faster and less sensitive to the surrounding environment. 3D phase maps of the bacteria are first reconstructed for the red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications in the detection of contaminants in water without any chemical processing or fluorescent dyes.
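    Five phase-shifted interferograms per color channel admit the standard Hariharan five-step phase-recovery formula; a minimal sketch is below (a generic pi/2-step phase-shifting algorithm, not necessarily the authors' exact reconstruction):

```python
import numpy as np

def five_step_phase(I1, I2, I3, I4, I5):
    """Hariharan five-step phase-shifting formula for interferograms
    recorded at phase shifts of -pi, -pi/2, 0, +pi/2, +pi.
    For I_k = A + B*cos(phi + (k-3)*pi/2):
        2*(I2 - I4)      = 4*B*sin(phi)
        2*I3 - I1 - I5   = 4*B*cos(phi)
    so arctan2 of the two recovers the wrapped phase phi."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
```

    Applied per color channel, this yields one wrapped phase map per wavelength, which is then unwrapped before converting phase to optical thickness or refractive index.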

  15. High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.

    PubMed

    Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre

    2017-06-03

    Spectral filter array imaging exhibits a strong similarity to color filter array imaging. This permits us to embed this technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline, extended from color filter arrays, that permits high dynamic range (HDR) spectral imaging. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research.
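    A generic HDR radiance-merge step of the kind such a pipeline extends can be sketched as follows (the triangular weighting scheme is a common textbook choice, assumed here; it is not the paper's implementation):

```python
import numpy as np

def hdr_merge(frames, exposure_times):
    """Merge several exposures of one spectral band into a radiance map.

    Each pixel's radiance estimate I/t is averaged across exposures with
    a triangular weight that discounts near-dark and near-saturated values.
    frames: list of float arrays with values scaled to [0, 1].
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # weight peaks at mid-range
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)
```

    Because well-exposed pixels dominate the weighted sum, pixels saturated in the long exposure are recovered from the short one, and vice versa.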

  16. Demonstration of KHILS two-color IR projection capability

    NASA Astrophysics Data System (ADS)

    Jones, Lawrence E.; Coker, Jason S.; Garbo, Dennis L.; Olson, Eric M.; Murrer, Robert Lee, Jr.; Bergin, Thomas P.; Goldsmith, George C., II; Crow, Dennis R.; Guertin, Andrew W.; Dougherty, Michael; Marler, Thomas M.; Timms, Virgil G.

    1998-07-01

    For more than a decade, there has been considerable discussion about using different IR bands for the detection of low-contrast military targets. Theory predicts that a target can have little to no contrast against the background in one IR band while having a discernible signature in another IR band. A significant amount of effort has been invested in establishing hardware that is capable of simultaneously imaging in two IR bands to take advantage of this phenomenon. Focal plane arrays (FPAs) are starting to materialize with this simultaneous two-color imaging capability. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) team of the Air Force Research Laboratory and the Guided Weapons Evaluation Facility (GWEF), both at Eglin AFB, FL, have spent the last 10 years developing the ability to project dynamic IR scenes to imaging IR seekers. Through the Wideband Infrared Scene Projector (WISP) program, the capability to project two simultaneous IR scenes to a dual-color seeker has been established at KHILS. WISP utilizes resistor arrays to produce the IR energy. Resistor arrays are not ideal blackbodies; the projection of two IR colors with resistor arrays therefore requires two optically coupled arrays. This paper documents the first demonstration of two-color simultaneous projection at KHILS. Agema cameras were used for the measurements. The Agema's HgCdTe detector has responsivity from 4 to 14 microns. A blackbody and two IR filters (MWIR: 4.2 to 7.4 microns, LWIR: 7.7 to 13 microns) were used to calibrate the Agema in two bands. Each filter was placed in front of the blackbody one at a time, and the temperature of the blackbody was stepped up in incremental amounts. The output counts from the Agema were recorded at each temperature. This calibration process established the radiance-to-Agema-output-count curves for the two bands. The WISP optical system utilizes a dichroic beam combiner to optically couple the two resistor arrays. 
The transmission path of the beam combiner provided the LWIR (6.75 to 12 microns), while the reflective path produced the MWIR (3 to 6.5 microns). Each resistor array was individually projected into the Agema through the beam combiner at incremental output levels. Once again the Agema's output counts were recorded at each resistor array output level. These projections established the resistor-array-output-to-Agema-count curves for the MWIR and LWIR resistor arrays. Using the radiance-to-Agema-count curves, the MWIR and LWIR resistor-array-output-to-radiance curves were established. With the calibration curves established, a two-color movie was projected and compared to the generated movie radiance values. By taking care to correctly account for the spectral qualities of the Agema camera, the calibration filters, and the dichroic beam combiner, the projections matched the theoretical calculations. In the near future, a Lockheed-Martin Multiple Quantum Well camera with true two-color IR capability will be tested.
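    The two-step curve composition described above (blackbody counts-to-radiance, then resistor drive-to-counts) amounts to chaining two interpolations per band; the sketch below is illustrative, and all names and numbers are placeholders, not KHILS data:

```python
import numpy as np

def build_drive_to_radiance(cal_counts, cal_radiance, drive_levels, drive_counts):
    """Compose the two measured calibration curves for one IR band:
    blackbody stepping gives counts -> radiance (cal_counts, cal_radiance);
    projecting the resistor array at stepped drive levels gives
    drive -> counts (drive_levels, drive_counts). Chaining them yields a
    drive -> radiance lookup. All x-arrays must be monotonically increasing."""
    drive_radiance = np.interp(drive_counts, cal_counts, cal_radiance)
    return lambda d: np.interp(d, drive_levels, drive_radiance)
```

    One such lookup is built per band (MWIR and LWIR), after which a commanded drive level can be converted directly to in-band radiance for comparison with the generated scene.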

  17. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When printing images from a camera, the user needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The image also exhibited an improved tonal scale and was visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations, and in each case it produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
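    Applying per-channel transfer curves of this kind amounts to three 256-entry lookup tables; a minimal sketch follows (the gamma-style shadow-lifting curve is a hypothetical stand-in for the curves the authors built interactively in Photoshop):

```python
import numpy as np

def apply_transfer_curves(image, curves):
    """Apply per-channel 256-entry lookup tables (R, G, B transfer curves)
    to an 8-bit RGB image."""
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = curves[c][image[..., c]]
    return out

def lift_shadows_curve(gamma=0.8):
    """A smooth gamma-style curve that opens shadow detail while keeping
    the 0 and 255 endpoints fixed (gamma < 1 brightens mid-shadows)."""
    x = np.arange(256) / 255.0
    return (255.0 * x**gamma + 0.5).astype(np.uint8)
```

    With gamma = 1 the curve is the identity; smaller gammas raise dark values while leaving black and white unchanged, which is the qualitative effect the paper reports for shadow detail.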

  18. Miniature infrared hyperspectral imaging sensor for airborne applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to fly as a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The spatial resolution is determined by the size of the focal plane and the diameter of the lenslet array. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. 
Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.
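    The tiling arithmetic in the two configurations above (2 x 2 on 512 x 512, and 4 x 4 on 1024 x 1024) can be captured in a one-line helper; the function name is illustrative:

```python
def spectral_image_layout(fpa_pixels, lenslet_grid):
    """Band count and per-band image size for a lenslet-array spectrometer:
    an n x n lenslet grid tiles a square focal plane of fpa_pixels x
    fpa_pixels into n*n spectral images, each (fpa_pixels // n) pixels
    on a side."""
    n = lenslet_grid
    return n * n, fpa_pixels // n
```

    Both configurations in the paper land on the same 256 x 256 per-band resolution: 4 bands from the 512-pixel array and 16 bands from the 1024-pixel array.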

  19. Infrared hyperspectral imaging miniaturized for UAV applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to fly as a payload on a mini-UAV or commercial quadcopter. An example is also given of how this technology can be used to quantify a hydrocarbon gas leak's volume and mass flow rates. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The spatial resolution is determined by the size of the focal plane and the diameter of the lenslet array. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. 
Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.

  20. Calibration strategies for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Gaug, Markus; Berge, David; Daniel, Michael; Doro, Michele; Förster, Andreas; Hofmann, Werner; Maccarone, Maria C.; Parsons, Dan; de los Reyes Lopez, Raquel; van Eldik, Christopher

    2014-08-01

    The Central Calibration Facilities workpackage of the Cherenkov Telescope Array (CTA) observatory for very high energy gamma ray astronomy defines the overall calibration strategy of the array, develops dedicated hardware and software for the overall array calibration and coordinates the calibration efforts of the different telescopes. The latter include LED-based light pulsers, and various methods and instruments to achieve a calibration of the overall optical throughput. On the array level, methods for the inter-telescope calibration and the absolute calibration of the entire observatory are being developed. Additionally, the atmosphere above the telescopes, used as a calorimeter, will be monitored constantly with state-of-the-art instruments to obtain a full molecular and aerosol profile up to the stratosphere. The aim is to provide a maximal uncertainty of 10% on the reconstructed energy-scale, obtained through various independent methods. Different types of LIDAR in combination with all-sky-cameras will provide the observatory with an online, intelligent scheduling system, which, if the sky is partially covered by clouds, gives preference to sources observable under good atmospheric conditions. Wide-field optical telescopes and Raman Lidars will provide online information about the height-resolved atmospheric extinction, throughout the field-of-view of the cameras, allowing for the correction of the reconstructed energy of each gamma-ray event. The aim is to maximize the duty cycle of the observatory, in terms of usable data, while reducing the dead time introduced by calibration activities to an absolute minimum.

  1. BRDF-dependent accuracy of array-projection-based 3D sensors.

    PubMed

    Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther

    2017-03-10

    In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.

  2. Wetlands of the Gulf Coast

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This set of images from the Multi-angle Imaging SpectroRadiometer highlights coastal areas of four states along the Gulf of Mexico: Louisiana, Mississippi, Alabama and part of the Florida panhandle. The images were acquired on October 15, 2001 (Terra orbit 9718) and represent an area of 345 kilometers x 315 kilometers.

    The two smaller images on the right are (top) a natural color view comprised of red, green, and blue band data from MISR's nadir (vertical-viewing) camera, and (bottom) a false-color view comprised of near-infrared, red, and blue band data from the same camera. The predominantly red color of the false-color image is due to the presence of vegetation, which is bright at near-infrared wavelengths. Cities appear as grey patches, with New Orleans visible at the southern edge of Lake Pontchartrain, along the left-hand side of the images. The Lake Pontchartrain Bridge runs approximately north-south across the middle of the lake. The distinctive shape of the Mississippi River Delta can be seen to the southeast of New Orleans. Other coastal cities are visible east of the Mississippi, including Biloxi, Mobile and Pensacola.

    The large image is similar to the true-color nadir view, except that red band data from the 60-degree backward-looking camera has been substituted into the red channel; the blue and green data from the nadir camera have been preserved. In this visualization, green hues appear somewhat subdued, and a number of areas with a reddish color are present, particularly near the mouths of the Mississippi, Pascagoula, Mobile-Tensaw, and Escambia Rivers. Here, the red color is highlighting differences in surface texture. This combination of angular and spectral information differentiates areas with aquatic vegetation associated with poorly drained bottom lands, marshes, and/or estuaries from the surrounding surface vegetation. These wetland regions are not as well differentiated in the conventional nadir views.

    Variations in ocean color are apparent in all three views, and represent the outflow of suspended sediment from the seabed shelf to the open waters of the Gulf of Mexico. Major features include the Mississippi Delta, where large amounts of land-derived sediments have been deposited in shallow coastal waters. These deltaic environments form a complex, interconnected web of estuarine channels and extensive coastal wetlands that provide important habitat for fisheries. The city of New Orleans is prone to flooding, with about 45% of the metropolitan core situated at or below sea level. The city is protected by levees, but the wetlands which also function as a buffer from storm surges have been disappearing.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  3. Pulse Mode Light Sensing Using Four-Layer Semiconductor Structures and Their Application in Neural Networks

    DTIC Science & Technology

    2008-12-01

    However, the visual sensation was found to occur in retinal areas distant from the implant [10]. Since the current generated under normal light...electronics could limit the use of the microphotodetector array in retinal stimulation. Alternatively, a thin array, containing 64 electrodes...that passes through the skull and skin. Outside the skull, the device is similar to the retinal stimulators, with a television camera mounted on

  4. A Daytime Aspect Camera for Balloon Altitudes

    NASA Technical Reports Server (NTRS)

    Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.; Six, N. Frank (Technical Monitor)

    2001-01-01

    We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600-1000 nm region of the spectrum, successfully provided daytime aspect information of approximately 10 arcsecond resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models, but the daytime stellar magnitude limit was lower than expected due to dispersion of red light by the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.

  5. Toward Active Control of Noise from Hot Supersonic Jets

    DTIC Science & Technology

    2012-05-14

    was developed that would allow for easy data sharing among the research teams. This format includes the acoustic data along with all calibration ... [Figure: (a) Far-Field Array Calibration; (b) MHz Rate PIV Camera Setup] ... Plenoptic camera is a similar setup to determine 3-D motion of the flow using a thick light sheet. 2.3 Update on CFD Progress In the previous interim

  6. -V2 plane on the Hubble Space Telescope

    NASA Image and Video Library

    2002-03-03

    STS109-E-5104 (3 March 2002) --- The Hubble Space Telescope is seen in the cargo bay of the Space Shuttle Columbia. Each present set of solar array panels will be replaced during one of the space walks planned for the coming week. The crew aimed various cameras, including the digital still camera used for this frame, out the shuttle's aft flight deck windows to take a series of survey type photos, the first close-up images of the telescope since December of 1999.

  7. -V2 plane on the Hubble Space Telescope

    NASA Image and Video Library

    2002-03-03

    STS109-E-5102 (3 March 2002) --- The Hubble Space Telescope is seen in the cargo bay of the Space Shuttle Columbia. Each present set of solar array panels will be replaced during one of the space walks planned for the coming week. The crew aimed various cameras, including the digital still camera used for this frame, out the shuttle's aft flight deck windows to take a series of survey type photos, the first close-up images of the telescope since December of 1999.

  8. The Road To The Objective Force. Armaments for the Army Transformation

    DTIC Science & Technology

    2001-06-18

    Vehicle Fire Support Vehicle •TOW 2B Anti-Tank Capability Under Armor •Detection of NBC Hazards Mortar Carrier •Dismounted M121 120mm MRT Initially...engaged from under armor M6 Launchers (x4) Staring Array Thermal Sight Height reduction for air transport Day Camera Target Acq Sight Armament Remote...PM BCT ANTI-TANK GUIDED MISSILE VEHICLE • TOWII • ITAS (Raytheon) - 2 Missiles • IBAS Day Camera • Missile is Remotely Fired Under Armor • M6 Smoke

  9. Laser speckle strain and deformation sensor using linear array image cross-correlation method for specifically arranged triple-beam triple-camera configuration

    NASA Technical Reports Server (NTRS)

    Sarrafzadeh-Khoee, Adel K. (Inventor)

    2000-01-01

    The invention provides a method of triple-beam and triple-sensor in a laser speckle strain/deformation measurement system. The triple-beam/triple-camera configuration combined with sequential timing of laser beam shutters is capable of providing indications of surface strain and structure deformations. The strain and deformation quantities, the four variables of surface strain, in-plane displacement, out-of-plane displacement and tilt, are determined in closed form solutions.

  10. Reticle stage based linear dosimeter

    DOEpatents

    Berger, Kurt W [Livermore, CA

    2007-03-27

    A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited to a photolithography system that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction transverse to the length of the ringfield illumination field that illuminates it; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.

  11. Reticle stage based linear dosimeter

    DOEpatents

    Berger, Kurt W.

    2005-06-14

    A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited to a photolithography system that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction transverse to the length of the ringfield illumination field that illuminates it; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.

  12. Application of Plenoptic PIV for 3D Velocity Measurements Over Roughness Elements in a Refractive Index Matched Facility

    NASA Astrophysics Data System (ADS)

    Thurow, Brian; Johnson, Kyle; Kim, Taehoon; Blois, Gianluca; Best, Jim; Christensen, Ken

    2014-11-01

    The application of Plenoptic PIV in a Refractive Index Matched (RIM) facility housed at Illinois is presented. Plenoptic PIV is an emerging 3D diagnostic that exploits the light-field imaging capabilities of a plenoptic camera. Plenoptic cameras utilize a microlens array to measure the position and angle of light rays captured by the camera. 3D/3C velocity fields are determined through application of the MART algorithm for volume reconstruction and a conventional 3D cross-correlation PIV algorithm. The RIM facility is a recirculating tunnel with a 62.5% aqueous solution of sodium iodide used as the working fluid; its resulting refractive index of 1.49 is equal to that of acrylic. Plenoptic PIV was used to measure the 3D velocity field of a turbulent boundary layer flow over a smooth wall, a single wall-mounted hemisphere, and a full array of hemispheres (i.e., a rough wall) with k/δ ~ 4.6. Preliminary time-averaged and instantaneous 3D velocity fields will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1235726.
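    The conventional cross-correlation step applied after volume reconstruction can be sketched in 2D with an FFT; this is a generic integer-pixel PIV correlation, not the authors' 3D implementation:

```python
import numpy as np

def xcorr_displacement(a, b):
    """Integer-pixel displacement of interrogation window b relative to a
    via circular FFT cross-correlation. Means are subtracted so uniform
    background does not bias the correlation peak."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    corr = np.real(np.fft.ifft2(A.conj() * B))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

    In practice the same operation is carried out on 3D interrogation volumes, and the integer peak is refined to sub-voxel precision before converting displacement to velocity.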

  13. Design of a 2-mm Wavelength KIDs Prototype Camera for the Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Velázquez, M.; Ferrusca, D.; Castillo-Dominguez, E.; Ibarra-Medel, E.; Ventura, S.; Gómez-Rivera, V.; Hughes, D.; Aretxaga, I.; Grant, W.; Doyle, S.; Mauskopf, P.

    2016-08-01

    A new camera is being developed for the Large Millimeter Telescope (Sierra Negra, México) by an international collaboration with the University of Massachusetts, the University of Cardiff, and Arizona State University. The camera is based on kinetic inductance detectors (KIDs), a very promising technology due to their sensitivity and, especially, their compatibility with frequency-domain multiplexing at microwave frequencies, which allows large-format arrays in comparison with other detection technologies for mm-wavelength astronomy. The instrument will have a 100-pixel array of KIDs to image the 2-mm wavelength band and is designed for closed-cycle operation using a pulse tube cryocooler along with a three-stage sub-kelvin 3He cooler to provide a 250 mK detector stage. RF cabling is used to read out the detectors from room temperature to the 250 mK focal plane, and the amplification stage is achieved with a low-noise amplifier operating at 4 K. The readout electronics will be based on open-source reconfigurable open-architecture computing hardware in order to perform real-time microwave transmission measurements and monitor the resonance frequency of each detector, as well as the detection process.
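    Monitoring each detector's resonance frequency reduces to locating the transmission-dip minimum in a swept measurement; below is a minimal dip-finding sketch with parabolic refinement (a generic approach, not the instrument's readout firmware):

```python
import numpy as np

def resonance_frequency(freqs, s21_mag):
    """Estimate a resonator's frequency as the |S21| transmission-dip
    minimum, refined by a parabolic fit through the three samples
    around the coarse minimum."""
    i = int(np.argmin(s21_mag))
    if 0 < i < len(freqs) - 1:
        y0, y1, y2 = s21_mag[i - 1], s21_mag[i], s21_mag[i + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            # vertex offset of the parabola through the three points
            shift = 0.5 * (y0 - y2) / denom
            return freqs[i] + shift * (freqs[i + 1] - freqs[i])
    return freqs[i]
```

    Tracking this estimate over time for each multiplexed tone is what turns a transmission measurement into a per-pixel detection signal.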

  14. Particle displacement tracking applied to air flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640 x 480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
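    The particle-pairing step of a PDT-style method can be sketched as nearest-neighbor matching between the particle centroids of two frames; this is a generic stand-in, not the paper's gated-intensifier timing scheme:

```python
import numpy as np

def track_particles(p0, p1, dt, max_disp):
    """Pair each particle centroid in frame 0 (p0, shape (N, 2)) with its
    nearest neighbor in frame 1 (p1, shape (M, 2)) within max_disp, and
    return (position, velocity) pairs with velocity = displacement / dt.
    Knowing which frame came first gives unambiguous direction."""
    vectors = []
    for p in p0:
        d = np.linalg.norm(p1 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            vectors.append((p, (p1[j] - p) / dt))
    return vectors
```

    With sparse seeding and small inter-frame displacements, nearest-neighbor pairing is reliable; denser fields require the search radius `max_disp` to stay below the typical particle spacing.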

  15. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging of up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, and horizontal signal line (HSL) buffers followed by a high-gain preamplifier and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in its operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or a noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.
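
The frame-averaging and gain/offset-control features can be sketched as a per-pixel two-point correction applied to an average of up to 16 frames; the names and signature here are assumptions, not the delivered system's interface:

```python
import numpy as np

def average_and_correct(frames, gain, offset, n_avg=16):
    """Average up to n_avg frames, then apply per-pixel gain/offset
    (two-point non-uniformity) correction. A sketch of the signal
    processing described, not the delivered system's code.

    frames       : sequence of 2-D arrays (raw FPA frames).
    gain, offset : scalars or per-pixel arrays from a calibration step.
    """
    stack = np.stack(frames[:n_avg]).astype(np.float64)
    mean = stack.mean(axis=0)          # temporal noise drops as 1/sqrt(N)
    return gain * mean + offset        # fixed-pattern correction
```

Averaging 16 frames reduces temporal noise by a factor of four, which is presumably why the system caps the feature there.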

  16. Inauguration and first light of the GCT-M prototype for the Cherenkov telescope array

    NASA Astrophysics Data System (ADS)

    Watson, J. J.; De Franco, A.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jegouzo, I.; Jogler, T.; Kraus, M.; Lapington, J. S.; Laporte, P.; Lefaucheur, J.; Markoff, S.; Melse, T.; Mohrmann, L.; Molyneux, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayède, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Vink, J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-01-01

    The Gamma-ray Cherenkov Telescope (GCT) is a candidate for the Small Size Telescopes (SSTs) of the Cherenkov Telescope Array (CTA). Its purpose is to extend the sensitivity of CTA to gamma-ray energies reaching 300 TeV. Its dual-mirror optical design and curved focal plane enable the use of a compact camera of 0.4 m diameter, while achieving a field of view above 8 degrees. Through the use of the digitising TARGET ASICs, the Cherenkov flash is sampled once per nanosecond continuously and then digitised when triggering conditions are met within the analogue outputs of the photosensors. Entire waveforms (typically covering 96 ns) for all 2048 pixels are then stored for analysis, allowing a broad spectrum of investigations to be performed on the data. Two prototypes of the GCT camera are under development, with differing photosensors: Multi-Anode Photomultipliers (MAPMs) and Silicon Photomultipliers (SiPMs). During November 2015, the GCT MAPM (GCT-M) prototype camera was integrated onto the GCT structure at the Observatoire de Paris-Meudon, where it observed the first Cherenkov light detected by a prototype instrument for CTA.
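
The sample-continuously, digitise-on-trigger behaviour (1 ns sampling, 96 ns waveforms stored per pixel) can be illustrated with a deliberately simplified threshold trigger on a single channel; this is a sketch under assumed names, not the TARGET ASIC trigger logic:

```python
import numpy as np

def capture_waveforms(samples, threshold, window=96):
    """Sketch of a threshold trigger on a continuously sampled stream:
    when a sample crosses `threshold`, store the `window`-sample waveform
    starting there (96 samples = 96 ns at 1 ns/sample, as in the GCT
    camera description), then skip past the stored window."""
    events = []
    i = 0
    while i < len(samples) - window:
        if samples[i] > threshold:
            events.append(samples[i:i + window].copy())
            i += window  # simple dead time: no re-trigger inside a window
        else:
            i += 1
    return events
```

The real camera forms triggers from analogue sums of neighbouring photosensor outputs rather than single samples, but the stored product is the same: a full waveform per pixel per event.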

  17. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    NASA Astrophysics Data System (ADS)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good-quality image or video, to reduce patient discomfort and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensor and the limitations of the human visual system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. Control and data transfer are handled by a USB 3.0 endpoint on the computer. The fully developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
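
One stage of such a pipeline, gamma correction, is typically realised in an FPGA as a precomputed lookup table indexed by the raw pixel value. A Python sketch of the table such hardware would hold; the gamma value and bit depth are assumptions, not figures from the paper:

```python
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Build a gamma-correction lookup table: out = in^(1/gamma), scaled
    to the full code range. In an FPGA this table lives in block RAM and
    correction is a single memory read per pixel."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint16)

def apply_gamma(raw, lut):
    """Apply the LUT to a raw image (array of integer codes)."""
    return lut[raw]
```

The LUT approach is what makes the stage cheap enough to fit alongside demosaicking and noise reduction at 60 fps.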

  18. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    NASA Astrophysics Data System (ADS)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are due solely to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. under contract DE-AC05-00OR22725.
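
The calibration-and-ratio step can be sketched as a flat-field (white-light) division per channel followed by a pixel-wise ratio, e.g. Dα/Dβ; variable names are assumptions and the actual scripts may differ:

```python
import numpy as np

def calibrated_ratio(img_a, img_b, flat_a, flat_b, eps=1e-12):
    """Pixel-to-pixel relative calibration against a uniform white-light
    exposure for each channel, then the line-intensity ratio map
    (e.g. D_alpha / D_beta). A sketch of the analysis described.

    img_a, img_b   : co-registered line-filtered images.
    flat_a, flat_b : white-light calibration frames for each channel.
    """
    a = img_a / np.maximum(flat_a, eps)   # relative response correction
    b = img_b / np.maximum(flat_b, eps)
    return a / np.maximum(b, eps)         # pixel-wise intensity ratio
```

The 1-pixel grid registration mentioned in the abstract is what makes this per-pixel division between the two cameras meaningful.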

  19. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially, and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer, where it is processed using advanced image-processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
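
The center-of-gravity calculation mentioned is, in its standard form, an intensity-weighted centroid over the digitised image; a minimal sketch of that standard form, not the patent's specific algorithm:

```python
import numpy as np

def center_of_gravity(image):
    """Intensity-weighted centroid ("center of gravity") of a 2-D image.
    Returns (row, col) in pixel coordinates."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (ys * image).sum() / total, (xs * image).sum() / total
```

Applied to a segmented region of interest, the same formula localises the feature whose position the system reports.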

  20. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses these benefits in detail and proposes the use of a computationally efficient block-adaptive scheme for lossless compression. Experimental results indicate that the scheme performs well for CCD raw images, attaining compression factors of more than two. The block-adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors, enabling lower memory bandwidth and storage requirements.
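
A block-adaptive lossless scheme of the general kind proposed can be sketched as per-block selection of a simple predictor, whose residuals would then be entropy-coded; this is an illustrative sketch of the technique class, not the paper's exact method:

```python
import numpy as np

def block_adaptive_residuals(channel, block=16):
    """Per-block adaptive prediction (left vs. above neighbour) for one
    CFA colour plane. For each block, the predictor with the smaller
    absolute-residual sum is chosen; residuals plus the per-block mode
    are what an entropy coder would consume. Illustrative sketch."""
    out = np.zeros(channel.shape, dtype=np.int32)
    modes = {}
    for by in range(0, channel.shape[0], block):
        for bx in range(0, channel.shape[1], block):
            blk = channel[by:by + block, bx:bx + block].astype(np.int32)
            # Residual against the left neighbour (0 outside the block).
            left = blk - np.pad(blk, ((0, 0), (1, 0)))[:, :-1]
            # Residual against the neighbour above.
            above = blk - np.pad(blk, ((1, 0), (0, 0)))[:-1, :]
            mode = 'left' if np.abs(left).sum() <= np.abs(above).sum() else 'above'
            out[by:by + block, bx:bx + block] = left if mode == 'left' else above
            modes[(by, bx)] = mode
    return out, modes
```

Operating on each CFA colour plane separately, as sketched here, keeps the prediction within spectrally similar pixels, which is why compressing raw mosaicked data this way works at all.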
