Sample records for large format camera

  1. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and can thus provide high-quality, high-resolution images. This digital large-format tile-scan camera acquires high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
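The focal-stack step mentioned above can be sketched generically. The following is a minimal illustration of sharpness-based focal-stack merging, not the paper's actual algorithm: each output pixel is taken from the slice whose local Laplacian energy is highest. The function names and the window size are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def focus_measure(img, win=7):
    """Local sharpness: squared Laplacian response, averaged over a window."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    pad = win // 2
    padded = np.pad(lap ** 2, pad, mode="edge")
    # mean Laplacian energy in each win x win neighbourhood
    return sliding_window_view(padded, (win, win)).mean(axis=(-2, -1))

def merge_focal_stack(stack):
    """Per pixel, keep the slice with the highest focus measure."""
    stack = np.stack(stack)                        # (N, H, W)
    measures = np.stack([focus_measure(s) for s in stack])
    best = measures.argmax(axis=0)                 # (H, W) winning slice index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real pipeline, and in particular the paper's, must also register the slices first, since magnification changes between focal positions; this sketch assumes an already-aligned stack.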

  2. Technology development: Future use of NASA's large format camera is uncertain

    NASA Astrophysics Data System (ADS)

    Rey, Charles F.; Fliegel, Ilene H.; Rohner, Karl A.

    1990-06-01

    The Large Format Camera, developed as a project to verify an engineering concept or design, has been flown only once, in 1984, on the shuttle Challenger. Since that flight, the camera has been in storage. NASA had expected that, following the camera's successful demonstration, other government agencies or private companies with special interests in photographic applications would absorb the costs of further flights using the Large Format Camera. But because shuttle transportation costs for the Large Format Camera were estimated at approximately $20 million (in 1987 dollars) per flight and the market for selling Large Format Camera products was limited, NASA was not successful in interesting other agencies or private companies in paying the costs. Using the camera on the space station does not appear to be a realistic alternative. Using the camera aboard NASA's Earth Resources Research (ER-2) aircraft may be feasible. Until the final disposition of the camera is decided, NASA has taken actions to protect it from environmental deterioration. The General Accounting Office (GAO) recommends that the NASA Administrator consider, first, using the camera on an aircraft such as the ER-2. NASA plans to solicit the private sector for expressions of interest in such use of the camera, at no cost to the government, and will be guided by the response. Second, GAO recommends that if aircraft use is determined to be infeasible, NASA consider transferring the camera to a museum, such as the National Air and Space Museum.

  3. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that would give the Orbiter Camera Payload System (OCPS) a mapping capability when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera's optical axis, such pointing information is made available.

  4. LWIR NUC using an uncooled microbolometer camera

    NASA Astrophysics Data System (ADS)

    Laveigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian; McHugh, Steve

    2010-04-01

    Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector (IRSP). Ideally, NUC will be performed in the same band in which the scene projector will be used. Cooled, large format MWIR cameras are readily available and have been used successfully to perform NUC; however, cooled large format LWIR cameras are not as common and are prohibitively expensive. Large format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Santa Barbara Infrared, Inc. reports progress on a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution are the main difficulties. A discussion of the processes developed to mitigate these issues follows.
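The two-point correction that underlies most NUC schemes can be sketched in a few lines. This is a generic illustration, not SBIR's procedure: per-pixel gain and offset are derived from two flat-field (uniform-scene) frames so that every pixel is mapped onto the mean response.

```python
import numpy as np

def two_point_nuc(cold, hot):
    """Per-pixel gain/offset from two uniform-scene frames (classic two-point NUC)."""
    gain = (hot.mean() - cold.mean()) / (hot - cold)   # flattens pixel-to-pixel gains
    offset = cold.mean() - gain * cold                 # flattens pixel-to-pixel offsets
    return gain, offset

def apply_nuc(raw, gain, offset):
    """Corrected frame: each pixel mapped onto the array-mean response."""
    return gain * raw + offset
```

After correction, any uniform scene between the two calibration levels comes out spatially flat; camera drift, the instability problem the abstract mentions, shows up as residual non-uniformity that forces re-calibration.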

  5. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    NASA Astrophysics Data System (ADS)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with a film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors, "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec, "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  6. Enhanced LWIR NUC using an uncooled microbolometer camera

    NASA Astrophysics Data System (ADS)

    LaVeigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian

    2011-06-01

    Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector, and the best NUC is performed in the band of interest for the sensor being tested. While cooled, large format MWIR cameras are readily available and have been used successfully to perform NUC, similar cooled, large format LWIR cameras are not as common and are prohibitively expensive. Large format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Some of these challenges were discussed in a previous paper. In this discussion, we report results from a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution were the main problems; these have been solved by the implementation of several compensation strategies as well as hardware used to stabilize the camera. In addition, other processes have been developed to allow iterative improvement and to support changing the post-NUC lookup table (LUT) without requiring re-collection of the pre-NUC data with the new LUT in use.

  7. Accuracy, resolution, and cost comparisons between small format and mapping cameras for environmental mapping

    NASA Technical Reports Server (NTRS)

    Clegg, R. H.; Scherz, J. P.

    1975-01-01

    Successful aerial photography depends on aerial cameras providing acceptable photographs within cost restrictions of the job. For topographic mapping where ultimate accuracy is required only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.

  8. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, the first digital aerial mapping camera to use a single ultra-large CCD sensor in order to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design, with one large monolithic panchromatic (PAN) sensor and four multispectral camera heads for R, G, B and NIR. For the first time a 391-megapixel CMOS sensor has been used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  9. Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)

    NASA Astrophysics Data System (ADS)

    Mollberg, Bernard H.

    1981-11-01

    The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC) which is a precision wide-angle cartographic instrument that is capable of producing high resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.

  10. Rapid orthophoto development system.

    DOT National Transportation Integrated Search

    2013-06-01

    The DMC system procured in the project represented state-of-the-art, large-format digital aerial camera systems at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...

  11. Feasibility study for the application of the large format camera as a payload for the Orbiter program

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The large format camera (LFC) designed as a 30 cm focal length cartographic camera system that employs forward motion compensation in order to achieve the full image resolution provided by its 80 degree field angle lens is described. The feasibility of application of the current LFC design to deployment in the orbiter program as the Orbiter Camera Payload System was assessed and the changes that are necessary to meet such a requirement are discussed. Current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.

  12. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    High-speed imaging facilities are important and necessary for building time-resolved measurement systems with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a long-focal-length lens, mainly consists of a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each 1024x1024 pixels, can be captured simultaneously by the developed camera. Besides, the camera system possesses good linearity, uniform spatial response, and an equivalent background illumination as low as 5 electrons/pixel/sec, which fully meets the measurement requirements of Dragon-I LIA.

  13. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.
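The V/h and exposure figures quoted above let one estimate why the LFC's forward motion compensation matters: without it, the image smears by roughly (V/h) x f x t during the exposure. A quick back-of-envelope check (my arithmetic, not from the source):

```python
def smear_um(vh_mrad_s, f_mm, t_ms):
    """Uncompensated image motion at the focal plane, in micrometres:
    smear = (V/h) * f * t_exp."""
    return (vh_mrad_s * 1e-3) * f_mm * (t_ms * 1e-3) * 1e3

f_mm = 12 * 25.4  # 12 inch focal length in mm

# gentlest case: slowest V/h (11 mrad/s) with the shortest exposure (4 ms)
best = smear_um(11, f_mm, 4)    # roughly 13 um
# harshest case: fastest V/h (41 mrad/s) with the longest exposure (32 ms)
worst = smear_um(41, f_mm, 32)  # roughly 400 um
```

At 88 lines/mm the resolution element is about 11 um, so even the gentlest case is marginal and the harshest would smear across dozens of resolution elements without motion compensation.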

  14. Preliminary investigation of Large Format Camera photography utility in soil mapping and related agricultural applications

    NASA Technical Reports Server (NTRS)

    Pelletier, R. E.; Hudnall, W. H.

    1987-01-01

    The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS 41-G (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent material, topographic and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.

  15. SHUTTLE - PAYLOADS (STS-41G) - KSC

    NASA Image and Video Library

    1984-10-05

    Payload canister transporter in the Vertical Processing Facility Clean Room loaded with the Earth Radiation Budget Satellite (ERBS), Large Format Camera (LFC), and Orbital Refueling System (ORS) for the STS-41G mission.

  16. STS-36 Mission Specialist Hilmers with AEROLINHOF camera on aft flight deck

    NASA Image and Video Library

    1990-03-03

    STS-36 Mission Specialist (MS) David C. Hilmers points the large-format AEROLINHOF camera out overhead window W7 on the aft flight deck of Atlantis, Orbiter Vehicle (OV) 104. Hilmers records Earth imagery using the camera. Hilmers and four other astronauts spent four days, 10 hours, and 19 minutes aboard OV-104 for the mission devoted to the Department of Defense (DOD).

  17. The Orbiter camera payload system's large-format camera and attitude reference system

    NASA Technical Reports Server (NTRS)

    Schardt, B. B.; Mollberg, B. H.

    1985-01-01

    The Orbiter camera payload system (OCPS) is an integrated photographic system carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a large-format camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. A secondary and supporting system to the LFC is the attitude reference system (ARS), a dual-lens stellar camera array (SCA) and camera support structure. The SCA is a 70 mm film system that is rigidly mounted to the LFC lens support structure and, through the simultaneous acquisition of two star fields with each earth viewing LFC frame, makes it possible to precisely determine the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high-precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment. The full OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on Oct. 5, 1984, as a major payload aboard the STS-41G mission.

  18. In-flight photogrammetric camera calibration and validation via complementary lidar

    NASA Astrophysics Data System (ADS)

    Gneeniss, A. S.; Mills, J. P.; Miller, P. E.

    2015-02-01

    This research uses lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. The methodology was tested using two photogrammetric datasets, from a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that as the number of control points increases, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.

  19. VizieR Online Data Catalog: BzJK observations around radio galaxies (Galametz+, 2009)

    NASA Astrophysics Data System (ADS)

    Galametz, A.; De Breuck, C.; Vernet, J.; Stern, D.; Rettura, A.; Marmo, C.; Omont, A.; Allen, M.; Seymour, N.

    2010-02-01

    We imaged the two targets using the Bessel B-band filter of the Large Format Camera (LFC) on the Palomar 5m Hale Telescope. We imaged the radio galaxy fields using the z-band filter of Palomar/LFC. In February 2005, we observed 7C 1751+6809 for 60-min under photometric conditions. In August 2005, we observed 7C 1756+6520 for 135-min but in non-photometric conditions. The tables provide the B, z, J and Ks magnitudes and coordinates of the pBzK* galaxies (red passively evolving candidates selected by BzK=(z-K)-(B-z)<-0.2 and (z-K)>2.2) for both fields. The B and z bands were obtained using the Large Format Camera (LFC) on the Palomar 5m Hale Telescope, and the J and Ks bands using Wide-field Infrared Camera (WIRCAM) of the Canada-France-Hawaii Telescope (CFHT). (2 data files).
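The colour selection quoted in the abstract is easy to encode. A minimal sketch follows; the function names are mine, but the cut follows the abstract's BzK = (z-K)-(B-z) definition:

```python
def bzk_index(B, z, K):
    """BzK colour index as quoted in the abstract: (z-K) - (B-z)."""
    return (z - K) - (B - z)

def is_pbzk(B, z, K):
    """Red, passively evolving candidate (pBzK*):
    BzK < -0.2 and (z-K) > 2.2, per the abstract's selection."""
    return bzk_index(B, z, K) < -0.2 and (z - K) > 2.2
```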

  20. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 x 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  1. Software for minimalistic data management in large camera trap studies

    PubMed Central

    Krishnappa, Yathin S.; Turner, Wendy C.

    2014-01-01

    The use of camera traps is now widespread and their importance in wildlife studies well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study’s three year period. The software system has the ability to automatically extract metadata from images, and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies. PMID:25110471

  2. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In digital holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  3. Eruptive patterns and structure of Isla Fernandina, Galapagos Islands, from SPOT-1 HRV and large format camera images

    NASA Technical Reports Server (NTRS)

    Munro, Duncan C.; Mouginis-Mark, Peter J.

    1990-01-01

    SPOT-1 HRV, and large format-camera images were used to investigate the distribution and structure of erupted materials on Isla Fernandina, Galapagos Islands. Maps of lava flows, fissures, cones and topography derived from these data allow the first study of the entire subaerial segment of this geographically remote and ecologically sensitive volcano. No significant departure from a uniform distribution of erupted lava with azimuth can be detected. Short (less than 4 km) lava flows commonly have their source in the summit region and longer (greater than 8 km) lava flows originate from vents at lower elevations. Catastrophic landslides are proposed as a possible explanation for the asymmetry of the coastline with respect to the caldera.

  4. Geometric Calibration and Validation of Ultracam Aerial Sensors

    NASA Astrophysics Data System (ADS)

    Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried

    2016-03-01

    We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented both for the large format nadir cameras and for the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of the workflow as well. The first part is the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software that are designed to better help the operator analyze large blocks of aerial images and judge the quality of the photogrammetric set-up.

  5. End-on soft x ray imaging of Field-Reversed Configurations (FRCs) on the FRX-C/Large Scale Modification (LSM) experiment

    NASA Astrophysics Data System (ADS)

    Taggart, D. P.; Gribble, R. J.; Bailey, A. D., III; Sugimoto, S.

    Recently, a prototype soft x ray pinhole camera was fielded on FRX-C/LSM at Los Alamos and on TRX at Spectra Technology. The soft x ray FRC images obtained using this camera stand out in high contrast from their surroundings. The camera was particularly useful for studying the FRC during and shortly after formation when, at certain operating conditions, flute-like structures at the edge and internal structures of the FRC were observed that other diagnostics could not resolve. Building on this early experience, a new soft x ray pinhole camera was installed on FRX-C/LSM, which permits more rapid data acquisition and briefer exposures. It will be used to continue studying FRC formation and to look for internal structure later in time that could be a signature of instability. The initial operation of this camera is summarized.

  6. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large format digital aerial camera UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages over other cameras of its generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data for the new Beichuan County, this paper describes the data processing and its effects.

  7. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of the cameras was the pixel count: even moderate-cost (~120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.
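The title's question reduces to arithmetic once a target print resolution is assumed; 300 ppi is a common figure for photographic prints (the assumption here is mine, not the paper's analysis):

```python
def pixels_for_print(width_in, height_in, ppi):
    """Pixel dimensions and megapixel count for a print at a given ppi."""
    w, h = round(width_in * ppi), round(height_in * ppi)
    return w, h, w * h / 1e6

# a 4"x6" print at 300 ppi needs 1200 x 1800 px, about 2.2 MP
```

By this yardstick even the 6-megapixel SLRs of 1995 comfortably exceeded what a 4"x6" print can show, which is consistent with the paper's argument that pixel size, not count, became the limiting quality factor.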

  8. Payload canister transporter in VPF clean room

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Payload canister transporter in Vertical Processing Facility (VPF) Clean Room loaded with Earth Radiation Budget Satellite (ERBS), Large Format Camera (LFC) and Orbital Refueling System (ORS) for STS-41G mission.

  9. Mission Report on the Orbiter Camera Payload System (OCPS) Large Format Camera (LFC) and Attitude Reference System (ARS)

    NASA Technical Reports Server (NTRS)

    Mollberg, Bernard H.; Schardt, Bruton B.

    1988-01-01

    The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.

  10. Prototypic Development and Evaluation of a Medium Format Metric Camera

    NASA Astrophysics Data System (ADS)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, for large-volume applications the availability of a metric camera would have several advantages: 1) high-quality optical components and stabilisation allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, and 3) a higher resulting precision can be expected. This article presents the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric. Its general accuracy potential is tested against calibrated lengths in a small-volume test environment based on the German guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements to deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2-0.4 mm is reached over a length of 28 m (referenced to a laser-tracker network measurement). All analyses have shown high stability of the interior orientation of the camera and indicate its suitability for a priori calibration in subsequent 3D measurements.
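    The VDI/VDE 2634.1 evaluation cited above boils down to comparing photogrammetrically measured lengths against calibrated references and reporting the largest absolute deviation. A minimal sketch with hypothetical scale-bar numbers (not data from the paper):

```python
def length_measurement_errors(measured_mm, reference_mm):
    """Length measurement error (LME) per VDI/VDE 2634.1-style tests:
    the signed difference between the photogrammetric result and the
    calibrated reference for each test length. The acceptance figure
    is the largest absolute error over all lengths and positions."""
    errors = [m - r for m, r in zip(measured_mm, reference_mm)]
    return errors, max(abs(e) for e in errors)

# Hypothetical calibrated lengths and measured results, in mm.
ref = [500.000, 800.000, 1200.000]
meas = [500.012, 799.991, 1200.020]
errs, lme_max = length_measurement_errors(meas, ref)
print(f"max |LME| = {lme_max:.3f} mm")
```

    A camera meeting the paper's small-volume figure would keep `lme_max` below 0.025 mm across all tested length positions.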

  11. ARC-2009-ACD09-0218-005

    NASA Image and Video Library

    2009-10-06

    NASA Conducts Airborne Science Aboard Zeppelin Airship: equipped with two imaging instruments enabling remote sensing and atmospheric science measurements not previously practical. Hyperspectral imager and large format camera mounted inside the Zeppelin nose fairing.

  12. Electronographic cameras for space astronomy.

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We also are developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  13. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.
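    The frequency-multiplexed readout described above works because each resonator carves a narrow notch into the feedline's transmission at its own resonance frequency, so many pixels can share one line. A toy sketch, assuming an idealized single-pole shunt-resonator model with illustrative Q values and tone placement (none of these parameters are from the camera itself):

```python
import numpy as np

def s21(freqs_hz, f0s_hz, q=20_000.0, qc=40_000.0):
    """Idealized feedline transmission past several notch-type
    resonators (single-pole model; q and qc are illustrative)."""
    t = np.ones(len(freqs_hz), dtype=complex)
    for f0 in f0s_hz:
        x = (freqs_hz - f0) / f0
        t *= 1.0 - (q / qc) / (1.0 + 2j * q * x)
    return t

def find_resonances(freqs_hz, mag, n):
    """Pick the n deepest local minima of |S21| -- a crude stand-in for
    the readout's task of locating each pixel's resonance."""
    is_min = (mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:])
    idx = np.where(is_min)[0] + 1
    return np.sort(freqs_hz[idx[np.argsort(mag[idx])][:n]])

f0s = [4.000e9, 4.010e9, 4.020e9]          # three pixels on one feedline
f = np.linspace(3.995e9, 4.025e9, 30_001)  # 1-kHz sweep grid
found = find_resonances(f, np.abs(s21(f, f0s)), len(f0s))
```

    A photon event increases the kinetic inductance, shifting a pixel's notch down in frequency; the real readout tracks those shifts per resonator rather than re-sweeping the whole band.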

  14. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG-compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
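    The feature-extraction step described above can be sketched as follows. The threshold value, toy array, and single direction here are illustrative assumptions; the paper uses four directional Markov processes and both Y and Cb difference arrays:

```python
import numpy as np

def transition_features(diff, t=4):
    """Markov transition-probability features from one difference 2-D
    array: values are thresholded to [-t, t] and a (2t+1) x (2t+1)
    transition matrix is estimated along the horizontal direction.
    t=4 is an assumed threshold, giving 81 features per matrix."""
    d = np.clip(diff, -t, t).astype(int)
    src = d[:, :-1].ravel() + t   # state at (i, j), shifted to 0..2t
    dst = d[:, 1:].ravel() + t    # state at (i, j+1)
    counts = np.zeros((2 * t + 1, 2 * t + 1))
    np.add.at(counts, (src, dst), 1.0)
    rows = counts.sum(axis=1, keepdims=True)
    return (counts / np.where(rows == 0, 1.0, rows)).ravel()

# Toy stand-in for a JPEG 2-D array of quantized Y-component values.
rng = np.random.default_rng(0)
arr = rng.integers(-20, 21, size=(64, 64))
diff = arr[:, :-1] - arr[:, 1:]   # horizontal difference 2-D array
feats = transition_features(diff)
```

    Concatenating such feature vectors over all directions and components yields the input that the multi-class SVM classifies by camera model.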

  15. Mach stem formation in outdoor measurements of acoustic shocks.

    PubMed

    Leete, Kevin M; Gee, Kent L; Neilsen, Tracianne B; Truscott, Tadd T

    2015-12-01

    Mach stem formation during outdoor acoustic shock propagation is investigated using spherical oxyacetylene balloons exploded above pavement. The location of the transition point from regular to irregular reflection and the path of the triple point are experimentally resolved using microphone arrays and a high-speed camera. The transition point falls between recent analytical work for weak irregular reflections and an empirical relationship derived from large explosions.

  16. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of its fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  17. Large-Format Dual-Counter Pixelated X-Ray Detector Platform: Phase II Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Adam; Williams, George; Huntington, Andrew

    2016-10-10

    Within the program, a Voxtel-led team demonstrated both prototype (48 x 48, 130-μm pitch, VX-798) and full-format (192 x 192, 100-μm pitch, VX-810) versions of a high-dynamic-range, x-ray photon-counting (HDR-XPC) sensor. The following tasks were completed: 1) integration and evaluation of the VX-798 prototype camera at the Advanced Photon Source beamline at Argonne National Laboratory; 2) design, simulation, and fabrication of the full-format VX-810 ROIC; 3) fabrication of thick, fully depleted silicon photodiodes optimized for x-ray photon collection; 4) hybridization of the VX-810 ROIC to the photodiode array to create the optically sensitive focal plane array (FPA); and 5) development of an evaluation camera to enable electrical and optical characterization of the sensor.

  18. Large, high resolution integrating TV sensor for astronomical applications

    NASA Technical Reports Server (NTRS)

    Spitzer, L. J.

    1977-01-01

    A magnetically focused SEC tube developed for photometric applications is described. Efforts to design a 70 mm version of the tube which meets the ST f/24 camera requirements of the space telescope are discussed. The photometric accuracy of the 70 mm tube is expected to equal that of the previously developed 35 mm tube. The tube meets the criterion of 50 percent response at 20 cycles/mm in the central region of the format, and, with the removal of the remaining magnetic parts, this spatial frequency is expected over almost all of the format. Since the ST f/24 camera requires sensitivity in the red as well as the ultraviolet and visible spectra, attempts were made to develop tubes with this ability. It was found that it may be necessary to choose between red and UV sensitivity and trade off red sensitivity for low background. Results of environmental tests indicate no substantive problems in utilizing the tube in a flight camera system that will meet the space shuttle launch requirements.

  19. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Amy E.; Hollabaugh, Craig

    Bechtel Nevada/NSTec recently announced deployment of its fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  20. Earth Observation taken during the 41G mission

    NASA Image and Video Library

    2009-06-25

    41G-120-056 (October 1984) --- Parts of Israel, Lebanon, Palestine, Syria and Jordan and part of the Mediterranean Sea are seen in this nearly vertical large-format camera view from the Earth-orbiting Space Shuttle Challenger. The Sea of Galilee is at center frame and the Dead Sea at bottom center. The frame's center coordinates are 32.5 degrees north latitude and 35.5 degrees east longitude. A Linhof camera, using 4" x 5" film, was used to expose the frame through one of the windows on Challenger's aft flight deck.

  1. Large format geiger-mode avalanche photodiode LADAR camera

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison

    2013-05-01

    Recently Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWAP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in2/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey a large area more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of Spectrolab's 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.
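    The quoted coverage figures for the 32x32 camera hang together arithmetically; a quick back-of-the-envelope check (the swath width is derived here, not stated in the abstract):

```python
FEET_PER_MILE = 5280.0

# Figures quoted in the abstract for the 32x32 camera.
speed_mph = 10.0                 # forward speed
coverage_sqmi_per_hr = 0.5       # area coverage rate
pixel_area_in2 = 25.0            # post-averaging ground sample area

# Area rate = swath width x forward speed, so the implied swath is:
swath_ft = coverage_sqmi_per_hr / speed_mph * FEET_PER_MILE
gsd_in = pixel_area_in2 ** 0.5   # square pixels -> 5-inch ground sample
print(f"implied swath ~{swath_ft:.0f} ft, ground sample ~{gsd_in:.0f} in")
```

    The ten-fold data-rate increase of the 32x128 version then buys some combination of wider swath, finer sampling, and higher forward speed.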

  2. Photographic zoom fisheye lens design for DSLR cameras

    NASA Astrophysics Data System (ADS)

    Yan, Yufeng; Sasian, Jose

    2017-09-01

    Photographic fisheye lenses with fixed focal length for cameras with different sensor formats have been well developed for decades. However, photographic fisheye lenses with variable focal length are rare on the market due in part to the greater design difficulty. This paper presents a large aperture zoom fisheye lens for DSLR cameras that produces both circular and diagonal fisheye imaging for 35-mm sensors and diagonal fisheye imaging for APS-C sensors. The history and optical characteristics of fisheye lenses are briefly reviewed. Then, a 9.2- to 16.1-mm F/2.8 to F/3.5 zoom fisheye lens design is presented, including the design approach and aberration control. Image quality and tolerance performance analysis for this lens are also presented.

  3. Plasma formation in water vapour layers in high conductivity liquids

    NASA Astrophysics Data System (ADS)

    Kelsey, C. P.; Schaper, L.; Stalder, K. R.; Graham, W. G.

    2011-10-01

    The vapour layer development stage of relatively low-voltage plasmas in conducting solutions has already been well explored. The nature of the discharges formed within the vapour layer, however, is still largely unexplored. Here we examine the nature of such discharges through a combination of fast imaging and spatially and temporally resolved spectroscopy and electrical characterisation. The experimental setup is a pin-to-plate discharge configuration with a -350 V, 200 μs pulse applied at a repetition rate of 2 Hz. A lens followed by a beam splitter directs light to one Andor ICCD camera, which captures images of the plasma emission, and to a second camera at the exit of a high-resolution spectrometer. Through synchronization of the camera images at specified times after plasma ignition (as determined from current-voltage characteristics), the images can be correlated with the spectral features. Initial measurements reveal two apparently different plasma formations. Stark broadening of the hydrogen Balmer beta line indicates electron densities of 3 to 5 × 10^20 m^-3 for plasmas produced early in the voltage pulse and an order of magnitude less for the later plasmas. Colin Kelsey is supported by a DEL NI PhD studentship.

  4. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows.

    PubMed

    Skupsch, C; Chaves, H; Brücker, C

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single-CCD cameras. The laser provides light pulses with energies in the range of 25 mJ and durations of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette; the laser beam's coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination. Different light fiber lengths enable a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10^6 frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed; the initial phase of the formation of the jet structure is recorded.
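    The fixed interframe delays come from the extra fiber length each camera's illumination travels through. A small sketch of the implied length increments, assuming a typical silica-core refractive index of about 1.46 (the actual fibers used are not specified in the abstract):

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_CORE = 1.46              # assumed refractive index of a silica fiber core

def extra_fiber_length_m(delay_s, n=N_CORE):
    """Fiber length difference producing a given optical delay
    (light travels at c/n inside the fiber)."""
    return delay_s * C_VACUUM / n

dt = 25e-9  # interframe time from the abstract
for k in range(4):
    print(f"camera {k + 1}: +{extra_fiber_length_m(k * dt):.2f} m of fiber")
```

    Each successive illumination channel thus needs roughly five metres more fiber than the last to realize the 25 ns spacing.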

  5. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource

    PubMed Central

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E.; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data. PMID:21045053

  6. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource.

    PubMed

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data.

  7. Space acquired photography

    USGS Publications Warehouse

    ,

    2008-01-01

    Interested in a photograph of the first space walk by an American astronaut, or the first photograph from space of a solar eclipse? Or maybe your interest is in a specific geologic, oceanic, or meteorological phenomenon? The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center is making photographs of the Earth taken from space available for search, download, and ordering. These photographs were taken by Gemini mission astronauts with handheld cameras or by the Large Format Camera that flew on space shuttle Challenger in October 1984. Space photographs are distributed by EROS only as high-resolution scanned or medium-resolution digital products.

  8. Tectonic geomorphology of the Andes with SIR-A and SIR-B

    NASA Technical Reports Server (NTRS)

    Bloom, Arthur L.; Fielding, Eric J.

    1986-01-01

    Data takes from SIR-A and SIR-B (Shuttle Imaging Radar) crossed all of the principal geomorphic provinces of the central Andes between 17 and 34 S latitude. In conjunction with Thematic Mapping images and photographs from hand-held cameras as well as from the Large Format Camera that was flown with SIR-B, the radar images give an excellent sampling of Andean geomorphology. In particular, the radar images show new details of volcanic rocks and landforms of late Cenozoic age in the Puna, and the exhumed surfaces of tilted blocks of Precambrian crystalline basement in the Sierras Pampeanas.

  9. Large Scale Structure From Motion for Autonomous Underwater Vehicle Surveys

    DTIC Science & Technology

    2004-09-01

    Govern the Formation of Multiple Images of a Scene and Some of Their Applications. MIT Press, 2001. [26] O. Faugeras and S. Maybank. Motion from point...Machine Vision Conference, volume 1, pages 384-393, September 2002. [69] S. Maybank and O. Faugeras. A theory of self-calibration of a moving camera

  10. Land cover/use classification of Cairns, Queensland, Australia: A remote sensing study involving the conjunctive use of the airborne imaging spectrometer, the large format camera and the thematic mapper simulator

    NASA Technical Reports Server (NTRS)

    Heric, Matthew; Cox, William; Gordon, Daniel K.

    1987-01-01

    In an attempt to improve the land cover/use classification accuracy obtainable from remotely sensed multispectral imagery, Airborne Imaging Spectrometer-1 (AIS-1) images were analyzed in conjunction with Thematic Mapper Simulator (NS001) imagery, Large Format Camera color-infrared photography, and black-and-white aerial photography. Specific portions of the combined data set were registered and used for classification. Following this procedure, the resulting derived data were tested using an overall accuracy assessment method. Precise photogrammetric 2D-3D-2D geometric modeling techniques are not the basis for this study; instead, the discussion presents the spectral findings from the image-to-image registrations. Problems associated with the AIS-1/TMS integration are considered, and useful applications of the imagery combination are presented. More advanced methodologies for imagery integration are needed if multisystem data sets are to be utilized fully. Nevertheless, the research described herein provides a formulation for future Earth Observation Station-related multisensor studies.

  11. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the "soft" channel and 3-5 keV for the "hard" channels. The original cameras utilize a large-format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  12. Design of a 2-mm Wavelength KIDs Prototype Camera for the Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Velázquez, M.; Ferrusca, D.; Castillo-Dominguez, E.; Ibarra-Medel, E.; Ventura, S.; Gómez-Rivera, V.; Hughes, D.; Aretxaga, I.; Grant, W.; Doyle, S.; Mauskopf, P.

    2016-08-01

    A new camera is being developed for the Large Millimeter Telescope (Sierra Negra, México) by an international collaboration with the University of Massachusetts, the University of Cardiff, and Arizona State University. The camera is based on kinetic inductance detectors (KIDs), a very promising technology due to their sensitivity and, especially, their compatibility with frequency-domain multiplexing at microwave frequencies, which allows larger-format arrays than other detection technologies for mm-wavelength astronomy. The instrument will have a 100-pixel array of KIDs to image the 2-mm wavelength band and is designed for closed-cycle operation using a pulse tube cryocooler along with a three-stage sub-kelvin 3He cooler to provide a 250 mK detector stage. RF cabling is used to read out the detectors from room temperature to the 250 mK focal plane, and the amplification stage is achieved with a low-noise amplifier operating at 4 K. The readout electronics will be based on open-source reconfigurable open architecture computing hardware (ROACH) in order to perform real-time microwave transmission measurements and monitor the resonance frequency of each detector, as well as the detection process.

  13. STS-61A earth observations

    NASA Image and Video Library

    2009-06-25

    61A-200-003 (30 Oct 1985) --- A large-format Linhof camera onboard the Space Shuttle Challenger provided this coastal view of Somalia. The perspective is looking north from Muqdisho (foreground) to Raas Xaafuun at the horizon. Cumulus clouds cover the Somali Desert. The elongated, thinner streak of clouds follows a topographically depressed area, a wash known as the Webi Shibeli.

  14. View of Scientific Instrument Module to be flown on Apollo 15

    NASA Image and Video Library

    1971-06-27

    S71-2250X (June 1971) --- A close-up view of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 lunar landing mission. Mounted in a previously vacant sector of the Apollo Service Module (SM), the SIM carries specialized cameras and instrumentation for gathering lunar-orbit scientific data. SIM equipment includes a laser altimeter for accurate measurement of height above the lunar surface; a large-format panoramic camera for mapping, correlated with a metric camera and the laser altimeter for surface mapping; a gamma-ray spectrometer on a 25-foot extendible boom; a mass spectrometer on a 21-foot extendible boom; X-ray and alpha-particle spectrometers; and a subsatellite which will be injected into lunar orbit carrying particle and magnetometer experiments and the S-band transponder.

  15. OSTA-3 Shuttle payload

    NASA Technical Reports Server (NTRS)

    Dillman, R. D.; Eav, B. B.; Baldwin, R. R.

    1984-01-01

    The Office of Space and Terrestrial Applications-3 payload, scheduled for flight on STS Mission 17, consists of four earth-observation experiments. The Feature Identification and Location Experiment-1 will spectrally sense and numerically classify the earth's surface into water, vegetation, bare earth, and ice/snow/cloud cover by means of spectral-ratio techniques. The Measurement of Atmospheric Pollution from Satellites experiment will measure CO distribution in the middle and upper troposphere. The Shuttle Imaging Radar-B uses side-looking synthetic aperture radar (SAR) to create two-dimensional images of the earth's surface. The Large Format Camera/Attitude Reference System will collect metric-quality color, color-IR, and black-and-white photographs for topographic mapping.

  16. BOREAS Level-0 C-130 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large-format (9 x 9 inch) color transparencies, with imagery from multiple missions (hundreds of frames) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.

  17. International Space Station: Expedition 2000

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Live footage of the International Space Station (ISS) presents an inside look at the groundwork and assembly of the ISS. Footage includes both animation and live shots of a Space Shuttle liftoff. Phil West, Engineer; Dr. Catherine Clark, Chief Scientist ISS; and Joe Edwards, Astronaut, narrate the video. The first topic of discussion is People and Communications. Good communication is a key component in our ISS endeavor. Dr. Catherine Clark uses two soup cans attached by a string to demonstrate communication. Bill Nye the Science Guy talks briefly about science aboard the ISS. Charlie Spencer, Manager of Space Station Simulators, talks about communication aboard the ISS. The second topic of discussion is Engineering. Bonnie Dunbar, Astronaut at Johnson Space Center, gives a tour of the Japanese Experiment Module (JEM). She takes us inside Node 2 and the U.S. Lab Destiny. She also shows where protein crystal growth experiments are performed. Audio terminal units are used for communication in the JEM. A demonstration of solar arrays and how they are tested is shown. Alan Bell, Project Manager MRMDF (Mobile Remote Manipulator Development Facility), describes the robot arm that is used on the ISS and how it maneuvers the Space Station. The third topic of discussion is Science and Technology. Dr. Catherine Clark, using a balloon attached to a weight, drops the apparatus to the ground to demonstrate microgravity. The bursting of the balloon is observed. Sherri Dunnette, Imaging Technologist, describes the various cameras that are used in space. The types of still cameras used are: 1) 35 mm, 2) medium format cameras, 3) large format cameras, 4) video cameras, and 5) the DV camera. Kumar Krishen, Chief Technologist ISS, explains Inframetrics infrared vision cameras and how they perform. The Short Arm Centrifuge is shown by Dr. Millard Reske, Senior Life Scientist, to subject astronauts to forces greater than 1-g. Reske is interested in the physiological effects on the eyes and the muscular system after exposure to forces greater than 1-g.

  18. Brandberg Prominence, Namibia, Africa

    NASA Image and Video Library

    1993-01-19

    STS054-151-009 (13-19 Jan 1993) --- This large format camera's view shows the circular volcanic structure of the Brandberg mountain, which at 2630 meters (8,550 feet) is the highest point in the new nation of Namibia. The Brandberg is a major feature in the very arid Namib Desert on Africa's southwest coast. Coastal fog brings some moisture to the driest parts of the desert.

  19. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a way to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  20. Terahertz Real-Time Imaging Uncooled Arrays Based on Antenna-Coupled Bolometers or FET Developed at CEA-Leti

    NASA Astrophysics Data System (ADS)

    Simoens, François; Meilhan, Jérôme; Nicolas, Jean-Alain

    2015-10-01

    Sensitive and large-format terahertz focal plane arrays (FPAs) integrated in compact, hand-held cameras that deliver real-time terahertz (THz) imaging are required for many application fields, such as non-destructive testing (NDT), security, and quality control in the food and agricultural products industry. Two technologies of uncooled THz arrays being studied at CEA-Leti, i.e., bolometers and complementary metal oxide semiconductor (CMOS) field effect transistors (FETs), are able to meet these requirements. This paper reviews the technological approaches followed and focuses on the latest modeling and performance analysis. The applicability of these arrays to NDT and security is then demonstrated with experimental tests. In particular, the high technological maturity of the THz bolometer camera is illustrated by fast scanning of a large field of view of opaque scenes in a complete body-scanner prototype.

  1. Study of Permanent Magnet Focusing for Astronomical Camera Tubes

    NASA Technical Reports Server (NTRS)

    Long, D. C.; Lowrance, J. L.

    1975-01-01

    A design is developed for a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, and all data on their magnetic configuration, size, weight, and the structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field are presented. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume in which the magnetic field is uniform to within 0.5% appears obtainable, and the point spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and ground-based applications.

  2. Hurricane Bonnie, Northeast of Bermuda, Atlantic Ocean

    NASA Image and Video Library

    1992-09-20

    STS047-151-618 (19 Sept 1992) --- A large format Earth observation camera captured this scene of Hurricane Bonnie during the late phase of the mission. Bonnie was located about 500 miles from Bermuda near a point centered at 35.4 degrees north latitude and 56.8 degrees west longitude. The Linhof camera was aimed through one of Space Shuttle Endeavour's aft flight deck windows (note the slight reflection at right). The crew members noticed the well-defined eye of this hurricane, in contrast to the almost non-existent eye of Hurricane Iniki, which had largely broken up by the beginning of the mission. Six NASA astronauts and a Japanese payload specialist conducted eight days of in-space research.

  3. Geomorphologic mapping of the lunar crater Tycho and its impact melt deposits

    NASA Astrophysics Data System (ADS)

    Krüger, T.; van der Bogert, C. H.; Hiesinger, H.

    2016-07-01

    Using SELENE/Kaguya Terrain Camera and Lunar Reconnaissance Orbiter Camera (LROC) data, we produced a new, high-resolution (10 m/pixel), geomorphological and impact melt distribution map for the lunar crater Tycho. The distal ejecta blanket and crater rays were investigated using LROC wide-angle camera (WAC) data (100 m/pixel), while the fine-scale morphologies of individual units were documented using high resolution (∼0.5 m/pixel) LROC narrow-angle camera (NAC) frames. In particular, Tycho shows a large coherent melt sheet on the crater floor, melt pools and flows along the terraced walls, and melt pools on the continuous ejecta blanket. The crater floor of Tycho exhibits three distinct units, distinguishable by their elevation and hummocky surface morphology. The distribution of impact melt pools and ejecta, as well as topographic asymmetries, support the formation of Tycho as an oblique impact from the W-SW. The asymmetric ejecta blanket, significantly reduced melt emplacement uprange, and the depressed uprange crater rim at Tycho suggest an impact angle of ∼25-45°.

  4. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  5. Cost Factors in Scaling in SfM Collections and Processing Solutions

    NASA Astrophysics Data System (ADS)

    Cherry, J. E.

    2015-12-01

    In this talk I will discuss the economics of scaling Structure from Motion (SfM)-style collections from 1 km² and below to hundreds and thousands of square kilometers. Considerations include the costs of the technical equipment: comparisons of small-, medium-, and large-format camera systems, as well as various GPS-INS systems and their impact on processing accuracy for various Ground Sampling Distances. Tradeoffs between camera formats and flight time are central. Weather conditions and planning high-altitude versus low-altitude flights are another economic factor, particularly in areas of persistently bad weather and in areas where ground logistics (i.e., hotel rooms and pilot incidentals) are expensive. Unique costs associated with UAS collections and experimental payloads will be discussed. Finally, the costs of equipment and labor differ between SfM processing and conventional orthomosaic and LiDAR processing. There are opportunities for 'economies of scale' in SfM collections under certain circumstances, but whether the accuracy specifications are firm/fixed or 'best effort' makes a difference.
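
    The tradeoff between camera format, flying height, and Ground Sampling Distance mentioned above can be sketched with the usual nadir GSD relation. The parameter values below are hypothetical examples, not figures from the talk.

```python
# Illustrative ground-sampling-distance (GSD) estimate for flight planning.
# GSD = flying height x pixel pitch / focal length (nadir view, flat terrain).

def gsd_m(altitude_m: float, pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Ground footprint of one pixel, in meters."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# A hypothetical medium-format camera (5.3 um pixels, 55 mm lens) at 1500 m AGL:
print(round(gsd_m(1500, 5.3, 55), 3))  # 0.145
```

    Roughly, a larger-format sensor covers a wider swath at the same GSD, reducing the number of flight lines and hence flight time, which is one way the camera-format and flight-time tradeoff enters the cost comparison.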

  6. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
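
    Using the mode figures quoted above, the pixel throughput of the selectable infrared modes can be compared directly:

```python
# Pixel throughput for the selectable camera modes quoted in the abstract.

def throughput_mpix_s(frame_rate_hz: int, width: int, height: int) -> float:
    """Megapixels read out per second for a given mode."""
    return frame_rate_hz * width * height / 1e6

# Infrared camera: 30 Hz full-frame mode vs. 300 Hz windowed mode.
print(round(throughput_mpix_s(30, 640, 480), 1))   # 9.2
print(round(throughput_mpix_s(300, 133, 142), 1))  # 5.7
```

    Note that the 300 Hz windowed mode actually reads out fewer pixels per second than the 30 Hz full frame, suggesting the higher frame rate is obtained by shrinking the frame size rather than by a faster pixel clock.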

  7. Absolute orbit determination using line-of-sight vector measurements between formation flying spacecraft

    NASA Astrophysics Data System (ADS)

    Ou, Yangwei; Zhang, Hongbo; Li, Bin

    2018-04-01

    The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, an optical camera is applied to measure the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation. A LIDAR (Light Detection and Ranging) instrument or radar is used to measure the range, and we assume that high-accuracy inertial attitude is available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limited field of view (FOV) of the cameras, the visibility of the spacecraft and the installation of the cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate position and velocity. The results show that navigation accuracy can be enhanced by using more deputies and that the installation of the cameras significantly affects navigation performance.
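
    The measurement model described above (camera azimuth/elevation plus LIDAR or radar range) can be sketched as follows; the frame and angle conventions here are illustrative assumptions, not necessarily those used in the paper.

```python
import math

# Minimal sketch of a line-of-sight measurement model: a relative position
# vector maps to (azimuth, elevation, range), and back again.

def los_measurements(rel_pos):
    """Relative position [x, y, z] (inertial frame) -> (azimuth, elevation, range)."""
    x, y, z = rel_pos
    rng = math.sqrt(x*x + y*y + z*z)
    az = math.atan2(y, x)        # azimuth measured in the x-y plane
    el = math.asin(z / rng)      # elevation above the x-y plane
    return az, el, rng

def rel_pos_from_measurements(az, el, rng):
    """Inverse mapping: reconstruct the relative position vector."""
    return [rng * math.cos(el) * math.cos(az),
            rng * math.cos(el) * math.sin(az),
            rng * math.sin(el)]

az, el, rng = los_measurements([3.0, 4.0, 12.0])
print([round(c, 6) for c in rel_pos_from_measurements(az, el, rng)])  # [3.0, 4.0, 12.0]
```

    In an EKF such as the one used in the paper, functions like these would serve as the (nonlinear) measurement model that is linearized at each update step.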

  8. Cine-servo lens technology for 4K broadcast and cinematography

    NASA Astrophysics Data System (ADS)

    Nurishi, Ryuji; Wakazono, Tsuyoshi; Usui, Fumiaki

    2015-09-01

    Central to the rapid evolution of 4K image-capture technology in the past few years has been the increasing deployment of large-format cameras with Super 35mm single sensors in TV production for diverse shows such as dramas, documentaries, wildlife, and sports. While large-format image capture has been the standard in the cinema world for quite some time, recent experience within the broadcast industry has revealed a variety of differences in requirements for large-format lenses compared to those of the cinema industry. A typical requirement for a broadcast lens is a considerably higher zoom ratio in order to avoid changing lenses in the middle of a live event, which is mostly not the case for traditional cinema productions. Another example is the need for compact size, light weight, and servo operability for a single camera operator shooting in a shoulder-mount ENG style. On the other hand, there are new requirements that are common to both worlds, such as smooth and seamless change in angle of view throughout the long zoom range, which potentially offers new image expression that never existed in the past. This paper will discuss the requirements from the two industries of cinema and broadcast, while at the same time introducing the new technologies and optical design concepts applied to our latest "CINE-SERVO" lens series, which presently consists of two models, CN7x17KAS-S and CN20x50IAS-H. It will further explain how Canon has realized 4K optical performance and fast servo control while simultaneously achieving compact size, light weight, and high zoom ratio, by referring to patent-pending technologies such as the optical power layout, lens construction, and glass material combinations.

  9. First Light for USNO 1.3-meter Telescope

    NASA Astrophysics Data System (ADS)

    Monet, A. K. B.; Harris, F. H.; Harris, H. C.; Monet, D. G.; Stone, R. C.

    2001-11-01

    The US Naval Observatory Flagstaff Station has recently achieved first light with its newest telescope -- a 1.3-meter, f/4 modified Ritchey-Chretien, located on the grounds of the station. The instrument was designed to produce a well-corrected field 1.7 degrees in diameter, and is expected to provide wide-field imaging with excellent astrometric properties. A number of test images have been obtained, using a temporary CCD camera in both drift and stare mode, and the results have been quite encouraging. Several astrometric projects are planned for this instrument, which will be operated in fully automated fashion. This paper will describe the telescope and its planned large-format mosaic CCD camera, and will preview some of the research for which it will be employed.

  10. Eruption of Kliuchevskoi volcano

    NASA Image and Video Library

    1994-10-05

    STS068-155-094 (30 September-11 October 1994) --- (Kliuchevskoi Volcano) The crewmembers used a Linhof large format Earth observation camera to photograph this nadir view of the week-old eruption on the Kamchatka peninsula. The eruption and the follow-up environmental activity were photographed from 115 nautical miles above Earth. Six NASA astronauts spent a week and a half aboard the Space Shuttle Endeavour in support of the Space Radar Laboratory 2 (SRL-2) mission.

  11. Displacement and deformation measurement for large structures by camera network

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

    A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.

  12. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  13. REVIEW OF DEVELOPMENTS IN SPACE REMOTE SENSING FOR MONITORING RESOURCES.

    USGS Publications Warehouse

    Watkins, Allen H.; Lauer, D.T.; Bailey, G.B.; Moore, D.G.; Rohde, W.G.

    1984-01-01

    Space remote sensing systems are compared for suitability in assessing and monitoring the Earth's renewable resources. Systems reviewed include the Landsat Thematic Mapper (TM), the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR), the French Systeme Probatoire d'Observation de la Terre (SPOT), the German Shuttle Pallet Satellite (SPAS) Modular Optoelectronic Multispectral Scanner (MOMS), the European Space Agency (ESA) Spacelab Metric Camera, the National Aeronautics and Space Administration (NASA) Large Format Camera (LFC) and Shuttle Imaging Radar (SIR-A and -B), the Russian Meteor satellite BIK-E and fragment experiments and MKF-6M and KATE-140 camera systems, the ESA Earth Resources Satellite (ERS-1), the Japanese Marine Observation Satellite (MOS-1) and Earth Resources Satellite (JERS-1), the Canadian Radarsat, the Indian Resources Satellite (IRS), and systems proposed or planned by China, Brazil, Indonesia, and others. Also reviewed are the concepts for a 6-channel Shuttle Imaging Spectroradiometer, a 128-channel Shuttle Imaging Spectrometer Experiment (SISEX), and the U. S. Mapsat.

  14. SCUBA-2: The next generation wide-field imager for the James Clerk Maxwell Telescope

    NASA Astrophysics Data System (ADS)

    Holland, W. S.; Duncan, W. D.; Kelly, B. D.; Peacocke, T.; Robson, E. I.; Irwin, K. D.; Hilton, G.; Rinehart, S.; Ade, P. A. R.; Griffin, M. J.

    2000-12-01

    We describe SCUBA-2 - the next generation continuum imaging camera for the James Clerk Maxwell Telescope. The instrument will capitalise on the success of the current SCUBA camera by having a much larger field-of-view and improved sensitivity. SCUBA-2 will be able to map the submillimetre sky several hundred times faster than SCUBA to the same noise level. Many areas of astronomy are expected to benefit - from large-scale cosmological surveys to probe galaxy formation and evolution, to studies of the earliest stages of star formation in our own Galaxy. Perhaps the most exciting prospect that SCUBA-2 will offer is in the statistical significance of wide-field surveys. The key science requirements of the new camera are the ability to make very deep images - reaching background confusion levels in only a couple of hours; to generate high-fidelity images at two wavelengths simultaneously; to map large areas of sky (tens of degrees) to a reasonable depth in only a few hours; and to carry out photometry of known-position point sources to a high accuracy. The technical design of SCUBA-2 will incorporate new-technology transition-edge sensors as the detecting element, with signals being read out using multiplexed SQUID amplifiers. As in SCUBA, there will be two arrays operating at 450 and 850 microns simultaneously. Fully sampling a field-of-view of 8 arcminutes square will require 25,600 and 6,400 pixels at 450 and 850 microns respectively (cf. 91 and 37 pixels with SCUBA!). Each pixel will have diffraction-limited resolution on the sky and a sensitivity dominated by the background photon noise. SCUBA-2 is a collaboration between a number of institutions. We anticipate delivery of the final instrument to the telescope before the end of 2005.
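
    The quoted pixel counts can be roughly checked with a back-of-the-envelope calculation, assuming the JCMT's 15 m aperture and Nyquist (lambda/2D) spatial sampling; both assumptions are mine, not stated in the abstract.

```python
import math

# Rough check of the pixel counts needed to fully sample an 8 arcmin square
# field at the Nyquist pitch (half the lambda/D diffraction scale).

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def pixels_for_fov(wavelength_m: float, aperture_m: float, fov_arcsec: float) -> float:
    pixel_arcsec = wavelength_m / (2.0 * aperture_m) * RAD_TO_ARCSEC  # Nyquist pitch
    per_side = fov_arcsec / pixel_arcsec
    return per_side ** 2

# 8 arcmin = 480 arcsec on a side:
print(round(pixels_for_fov(450e-6, 15.0, 480.0)))  # ~24,000, close to the quoted 25,600
print(round(pixels_for_fov(850e-6, 15.0, 480.0)))  # ~6,700, close to the quoted 6,400
```

    The small mismatch with the quoted figures presumably reflects design margins and detector layout choices beyond this simple sampling argument.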

  15. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration, yet a large target is difficult to make, carry, and employ. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method can not only achieve calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with large FOV can be effectively addressed by the proposed method, which also offers good operability.
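
    A minimal sketch of one way to assemble a VLPT: feature points of several small planar targets are mapped into a single common target frame using each target's in-plane pose. The poses and grid sizes below are hypothetical; the paper derives the virtual points from the calibration images rather than from known poses.

```python
import math

# Sketch: merge the feature points of several small targets (STs) into one
# virtual large planar target (VLPT) frame via in-plane rigid transforms.

def st_grid(rows, cols, spacing):
    """Feature points of one small planar target in its own frame (meters)."""
    return [(c * spacing, r * spacing) for r in range(rows) for c in range(cols)]

def to_vlpt(points, angle_rad, tx, ty):
    """Apply an ST's in-plane rotation + translation to place it in the VLPT frame."""
    ca, sa = math.cos(angle_rad), math.sin(angle_rad)
    return [(ca * x - sa * y + tx, sa * x + ca * y + ty) for x, y in points]

# Two 2x2 STs, the second shifted 0.5 m to the right; together they form one VLPT.
vlpt = to_vlpt(st_grid(2, 2, 0.1), 0.0, 0.0, 0.0) + \
       to_vlpt(st_grid(2, 2, 0.1), 0.0, 0.5, 0.0)
print(len(vlpt))  # 8
```

    Once assembled, the VLPT's virtual points can serve as the object points of a standard planar-target calibration, which is how the method reaches the precision of a physically large target.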

  16. Efficient large-scale graph data optimization for intelligent video surveillance

    NASA Astrophysics Data System (ADS)

    Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking-lot surveillance, automotive systems, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart-camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are well studied; here we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so most cameras do not overlap each other's field of view. This setting is challenging because of the unknown topology of the network, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper, we present a comprehensive survey of recent research results addressing topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.

  17. Hubble Space Telescope Deploy, Cuba, Bahamas and Gulf of Mexico

    NASA Image and Video Library

    1990-04-29

    STS031-151-010 (25 April 1990) --- The Hubble Space Telescope (HST), still in the grasp of Discovery's Remote Manipulator System (RMS), is backdropped over Cuba and the Bahama Islands. In this scene, its solar array panels and high-gain antennae have yet to be deployed. This scene was captured with a large format Aero Linhof camera used by several previous flight crews to record Earth scenes.

  18. The opto-cryo-mechanical design of the short wavelength camera for the CCAT Observatory

    NASA Astrophysics Data System (ADS)

    Parshley, Stephen C.; Adams, Joseph; Nikola, Thomas; Stacey, Gordon J.

    2014-07-01

    The CCAT observatory is a 25-m class Gregorian telescope designed for submillimeter observations that will be deployed at Cerro Chajnantor (~5600 m) in the high Atacama Desert region of Chile. The Short Wavelength Camera (SWCam) for CCAT is an integral part of the observatory, enabling the study of star formation at high and low redshifts. SWCam will be a facility instrument, available at first light and operating in the telluric windows at wavelengths of 350, 450, and 850 μm. In order to trace the large curvature of the CCAT focal plane, and to suit the available instrument space, SWCam is divided into seven sub-cameras, each configured to a particular telluric window. A fully refractive optical design in each sub-camera will produce diffraction-limited images. The material of choice for the optical elements is silicon, due to its excellent transmission in the submillimeter and its high index of refraction, enabling thin lenses of a given power. The cryostat's vacuum windows double as the sub-cameras' field lenses and are ~30 cm in diameter. The other lenses are mounted at 4 K. The sub-cameras will share a single cryostat providing thermal intercepts at 80, 15, 4, 1 and 0.1 K, with cooling provided by pulse tube cryocoolers and a dilution refrigerator. The use of the intermediate temperature stage at 15 K minimizes the load at 4 K and reduces operating costs. We discuss our design requirements, specifications, key elements and expected performance of the optical, thermal and mechanical design for the short wavelength camera for CCAT.

  19. SU-E-J-17: A Study of Accelerator-Induced Cerenkov Radiation as a Beam Diagnostic and Dosimetry Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bateman, F; Tosh, R

    2014-06-01

    Purpose: To investigate accelerator-induced Cerenkov radiation imaging as a possible beam diagnostic and medical dosimetry tool. Methods: Cerenkov emission produced by clinical accelerator beams in a water phantom was imaged using a camera system comprised of a high-sensitivity thermoelectrically-cooled CCD camera coupled to a large aperture (f/0.75) objective lens with 16:1 magnification. This large format lens allows a significant amount of the available Cerenkov light to be collected and focused onto the CCD camera to form the image. Preliminary images, obtained with 6 MV photon beams, used an unshielded camera mounted horizontally with the beam normal to the water surface, and confirmed the detection of Cerenkov radiation. Several improvements were subsequently made including the addition of radiation shielding around the camera, and altering of the beam and camera angles to give a more favorable geometry for Cerenkov light collection. A detailed study was then undertaken over a range of electron and photon beam energies and dose rates to investigate the possibility of using this technique for beam diagnostics and dosimetry. Results: A series of images were obtained at a fixed dose rate over a range of electron energies from 6 to 20 MeV. The location of maximum intensity was found to vary linearly with the energy of the beam. A linear relationship was also found between the light observed from a fixed point on the central axis and the dose rate for both photon and electron beams. Conclusion: We have found that the analysis of images of beam-induced Cerenkov light in a water phantom has potential for use as a beam diagnostic and medical dosimetry tool. Our future goals include the calibration of the light output in terms of radiation dose and development of a tomographic system for 3D Cerenkov imaging in water phantoms and other media.

  20. In situ camera observations reveal major role of zooplankton in modulating marine snow formation during an upwelling-induced plankton bloom

    NASA Astrophysics Data System (ADS)

    Taucher, Jan; Stange, Paul; Algueró-Muñiz, María; Bach, Lennart T.; Nauendorf, Alice; Kolzenburg, Regina; Büdenbender, Jan; Riebesell, Ulf

    2018-05-01

    Particle aggregation and the consequent formation of marine snow alter important properties of biogenic particles (size, sinking rate, degradability), thus playing a key role in controlling the vertical flux of organic matter to the deep ocean. However, there are still large uncertainties about rates and mechanisms of particle aggregation, as well as the role of plankton community structure in modifying biomass transfer from small particles to large fast-sinking aggregates. Here we present data from a high-resolution underwater camera system that we used to observe particle size distributions and formation of marine snow (aggregates >0.5 mm) over the course of a 9-week in situ mesocosm experiment in the Eastern Subtropical North Atlantic. After an oligotrophic phase of almost 4 weeks, addition of nutrient-rich deep water (650 m) initiated the development of a pronounced diatom bloom and the subsequent formation of large marine snow aggregates in all 8 mesocosms. We observed a substantial time lag between the peaks of chlorophyll a and marine snow biovolume of 9-12 days, which is much longer than previously reported and indicates a marked temporal decoupling of phytoplankton growth and marine snow formation during our study. Despite this time lag, our observations revealed substantial transfer of biomass from small particle sizes (single phytoplankton cells and chains) to marine snow aggregates of up to 2.5 mm diameter (ESD), with most of the biovolume being contained in the 0.5-1 mm size range. Notably, the abundance and community composition of mesozooplankton had a substantial influence on the temporal development of particle size spectra and formation of marine snow aggregates: While higher copepod abundances were related to reduced aggregate formation and biomass transfer towards larger particle sizes, the presence of appendicularia and doliolids enhanced formation of large marine snow. 
Furthermore, we combined in situ particle size distributions with measurements of particle sinking velocity to compute the instantaneous (potential) vertical mass flux. Somewhat surprisingly, however, we did not find a coherent relationship between our computed flux and the vertical mass flux measured by sediment traps at 15 m depth. Although the onset of measured vertical flux roughly coincided with the emergence of marine snow, we found substantial variability in mass flux among mesocosms that was not related to marine snow numbers and was instead presumably driven by zooplankton-mediated alteration of sinking biomass and export of small particles (fecal pellets). Altogether, our findings highlight the role of zooplankton community composition and feeding interactions in shaping particle size spectra and the formation of marine snow aggregates, with important implications for our understanding of particle aggregation and the vertical flux of organic matter in the ocean.
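The instantaneous (potential) vertical mass flux computation described above can be sketched as a sum over size classes of number concentration times particle volume times sinking velocity. A minimal Python sketch, with invented size bins, concentrations, and an assumed power-law sinking relation (none of these values are from the study):

```python
import numpy as np

# Hypothetical illustration: potential vertical flux as the sum over size
# classes of number concentration x particle biovolume x sinking velocity.
# Bin diameters, concentrations, and the power-law sinking relation
# w = a * d**b are invented values, not data from the experiment.

d = np.array([0.1, 0.25, 0.5, 1.0, 2.5])          # particle ESD per size class (mm)
n = np.array([5000.0, 800.0, 120.0, 30.0, 2.0])    # number concentration (particles / L)

vol = (np.pi / 6.0) * d**3                         # spherical biovolume per particle (mm^3)
w = 50.0 * d**0.6                                  # assumed sinking velocity (m / day)

flux_per_class = n * vol * w                       # biovolume flux contribution per class
total_flux = flux_per_class.sum()

print(flux_per_class)
print(total_flux)
```

With these invented numbers the largest contribution comes from the biggest aggregates, even though they are by far the rarest, which is the general point such size-spectrum flux estimates make.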

  1. Astronauts Sullivan and Leestma perform in-space simulation of refueling

    NASA Image and Video Library

    1984-10-14

    S84-43432 (11 Oct. 1984) --- Appearing small in the center background of this image, astronauts Kathryn D. Sullivan, left, and David C. Leestma, both 41-G mission specialists, perform an in-space simulation of refueling another spacecraft in orbit. Their station on the space shuttle Challenger is the orbital refueling system (ORS), positioned on the mission-peculiar equipment support structure (MPESS). The Large Format Camera (LFC) is left of the two mission specialists. In the left foreground is the antenna for the shuttle imaging radar (SIR-B) system onboard. The Canadian-built remote manipulator system (RMS) is positioned to allow close-up recording capability of the busy scene. A 50mm lens on a 70mm camera was used to photograph this scene. Photo credit: NASA

  2. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, installation and alignment must be performed twice. The large-aperture test system for infrared and visible CCD cameras instead uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, giving it good market prospects.
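The multiple-frame averaging mentioned above is a standard noise-reduction step: averaging N frames of a static target suppresses zero-mean random noise by roughly sqrt(N). A minimal sketch with invented frame size and noise level:

```python
import numpy as np

# Sketch of multiple-frame averaging: 16 noisy exposures of the same static
# target are averaged, reducing zero-mean random noise by about sqrt(16) = 4.
# Frame size (64 x 64), signal level, and noise sigma are invented values.

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)                      # ideal noise-free target image
frames = truth + rng.normal(0.0, 5.0, (16, 64, 64))   # 16 noisy exposures

single_noise = frames[0].std()                        # noise level of one raw frame
averaged = frames.mean(axis=0)                        # multiple-frame average
avg_noise = (averaged - truth).std()                  # residual noise after averaging

print(single_noise, avg_noise)
```

The residual noise of the averaged frame comes out near a quarter of the single-frame noise, matching the sqrt(N) expectation.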

  3. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    The court asked us whether it is possible to determine if an image was made with a specific digital camera. This question must be answered in child pornography cases, where evidence is needed that a certain picture was made with a specific camera. We investigated several methods of examining cameras to determine whether a specific image was made with a given camera: defects in CCDs, the file formats used, noise introduced by the pixel arrays, and watermarking applied by the camera manufacturer.

  4. Flank vents and graben as indicators of Late Amazonian volcanotectonic activity on Olympus Mons

    NASA Astrophysics Data System (ADS)

    Peters, S. I.; Christensen, P. R.

    2017-03-01

    Previous studies have focused on large-scale features on Olympus Mons, such as its flank terraces, the summit caldera complex, and the basal escarpment and aureole deposits. Here we identify and characterize previously unrecognized and unmapped small scale features to help further understand the volcanotectonic evolution of this enormous volcano. Using Context Camera, High Resolution Imaging Science Experiment, Thermal Emission Imaging System, High Resolution Stereo Camera Digital Terrain Model, and Mars Orbiter Laser Altimeter data, we identified and characterized the morphology and distribution of 60 flank vents and 84 grabens on Olympus Mons. We find that effusive eruptions have dominated volcanic activity on Olympus Mons in the Late Amazonian. Explosive eruptions were rare, implying volatile-poor magmas and/or a lack of magma-water interactions during the Late Amazonian. The distribution of flank vents suggests dike propagation of hundreds of kilometers and shallow magma storage. Small grabens, not previously observed in lower-resolution data, occur primarily on the lower flanks of Olympus Mons and indicate late-stage extensional tectonism. Based on superposition relationships, we have concluded two stages of development for Olympus Mons during the Late Amazonian: (1) primarily effusive resurfacing and formation of flank vents followed by (2) waning effusive volcanism and graben formation and/or reactivation. This developmental sequence resembles that proposed for Ascraeus Mons and other large Martian shields, suggesting a similar geologic evolution for these volcanoes.

  5. GISMO, a 2 mm Bolometer Camera Optimized for the Study of High Redshift Galaxies

    NASA Technical Reports Server (NTRS)

    Staguhn, J.

    2007-01-01

    The 2 mm spectral range provides a unique terrestrial window enabling ground-based observations of the earliest active dusty galaxies in the universe, thereby allowing better constraints on the star formation rate in these objects. We present a progress report on our bolometer camera GISMO (the Goddard-IRAM Superconducting 2-Millimeter Observer), which will obtain large and sensitive sky maps at this wavelength. The instrument will be used at the IRAM 30 m telescope, where we expect to install it in 2007. The camera uses an 8 x 16 planar array of multiplexed TES bolometers, which incorporates our recently designed Backshort Under Grid (BUG) architecture. GISMO will be very efficient at detecting sources serendipitously in large sky surveys. With the background-limited performance of the detectors, the camera provides significantly greater imaging sensitivity and mapping speed at this wavelength than has previously been possible. The major scientific driver for the instrument is to provide the IRAM 30 m telescope with the capability to rapidly observe galactic and extragalactic dust emission, in particular from high-z ULIRGs and quasars, even in the summer season. The instrument will fill in the SEDs of high-redshift galaxies at the Rayleigh-Jeans part of the dust emission spectrum, even at the highest redshifts. Our source count models predict that GISMO will serendipitously detect one galaxy every four hours on the blank sky, and that one quarter of these galaxies will be at redshifts above z = 6.5.

  6. Camera Perspective Bias in Videotaped Confessions: Evidence that Visual Attention Is a Mediator

    ERIC Educational Resources Information Center

    Ware, Lezlee J.; Lassiter, G. Daniel; Patterson, Stephen M.; Ransom, Michael R.

    2008-01-01

    Several experiments have demonstrated a "camera perspective bias" in evaluations of videotaped confessions: videotapes with the camera focused on the suspect lead to judgments of greater voluntariness than alternative presentation formats. The present research investigated potential mediators of this bias. Using eye tracking to measure visual…

  7. New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kemper, G.

    2012-07-01

    A huge number of small and medium-sized sensors have entered the market. Today's medium-format sensors reach 80 MPix and can handle projects of medium size, comparable with the first large-format digital cameras about 6 years ago. New high-quality lenses and new developments in integration have prepared the market for photogrammetric work. Camera makers such as Phase One or Hasselblad, and producers or integrators such as Trimble, Optec, and others, have adapted these cameras for professional image production. In combination with small camera stabilizers they can also be used in small aircraft, making the equipment compact and easily transportable, e.g. for rapid-assessment purposes. Combining different camera sensors enables multi- or hyperspectral installations, useful e.g. for agricultural or environmental projects. Arrays of oblique-viewing cameras are on the market as well; in many cases these are small and medium-format sensors combined as rotating or shifting devices, or simply as a fixed setup. Beside proper camera installation and integration, the software that controls the hardware and guides the pilot must solve far more tasks than a conventional flight management system did in the past. Small and relatively cheap laser scanners (e.g. Riegl) are on the market, and their proper combination with multispectral cameras and integrated planning and navigation is a challenge that has been solved by different software packages. Turnkey solutions are available, e.g. for monitoring power-line corridors, where taking images is just one part of the job. Integration of thermal camera systems with laser scanning and video capture must be combined with object-specific information stored in a database and linked when approaching the navigation point.

  8. Small format digital photogrammetry for applications in the earth sciences

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, Dirk

    2010-05-01

    Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from the sub-millimetre to the kilometre scale. Such a measurement device is well suited for use by earth scientists working in the field, where a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of an earth scientist's field equipment, the main focus of the field work is often not surveying, and a lack of photogrammetric training calls for an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools such as a total station or GNSS sensor. B) Optimization of image acquisition and the geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to the preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. 
The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. The wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total-station survey. The smallest camera in the test required calibration on the job, as its interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact camera with a lens-retracting system was accomplished by the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was lower than that of the Ricoh, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement has a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera. A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. 
A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. With geometrically stable cameras, image acquisition to cover the area of interest with stereo pairs was fairly straightforward. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that can be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy achieved in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops, even the compact cameras yielded maximum length measurement errors in the millimetre range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of the image analysis. A field manual was developed to guide novice users and students in this technique. The technique does not trade precision for ease of use; successful users of the presented method therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, highly automated camera-calibration package, allows beginners to calibrate their cameras within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2, which was also used for further image analysis.
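The role of the laser-measured sphere distances as scale information can be illustrated as follows: a free-network photogrammetric model has arbitrary scale, and a single measured distance between two target points fixes it. All coordinates and the distance below are invented for illustration; the actual adjustment in the paper uses three spheres and a rigorous bundle adjustment rather than this simple rescaling:

```python
import numpy as np

# Hypothetical sketch: fixing the metric scale of an arbitrarily scaled
# photogrammetric model from one laser-measured distance between two
# target spheres.  All coordinates and the measured distance are invented.

model_pts = {                                      # model coordinates (arbitrary units)
    "sphere_a": np.array([0.00, 0.00, 0.00]),
    "sphere_b": np.array([3.20, 1.10, 0.40]),
    "outcrop_pt": np.array([1.50, 4.00, 2.20]),
}

measured_ab = 6.802                                # laser-measured distance a-b (metres)

model_ab = np.linalg.norm(model_pts["sphere_b"] - model_pts["sphere_a"])
scale = measured_ab / model_ab                     # metres per model unit

scaled = {name: p * scale for name, p in model_pts.items()}
print(scale)
```

After rescaling, every distance in the model is metric, so measurements between any two points (e.g. across an outcrop face) can be read off directly.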

  9. Development of high energy micro-tomography system at SPring-8

    NASA Astrophysics Data System (ADS)

    Uesugi, Kentaro; Hoshino, Masato

    2017-09-01

    A high-energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si(511) double-crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of a visible-light conversion system and an sCMOS camera. The effective pixel size is selectable in discrete steps between 6.5 μm/pixel and 25.5 μm/pixel by changing a tandem lens. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, an alkaline battery and a nodule from Bolivia were imaged. Details of the internal structure of the battery and a female mold of a trilobite were successfully imaged, the latter without breaking the fossil.
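The detector format and the selectable effective pixel sizes quoted above imply the achievable fields of view directly (a small arithmetic check using only the numbers in the abstract):

```python
# Field of view of a 2048 x 2048 detector at the two quoted effective
# pixel sizes: fov = pixel_count * pixel_size.

pixels = 2048
fovs_mm = {p_um: pixels * p_um / 1000.0 for p_um in (6.5, 25.5)}  # mm

print(fovs_mm)   # {6.5: 13.312, 25.5: 52.224}
```

So the system trades resolution for coverage: roughly a 13 mm field at the finest pixel size versus about 52 mm at the coarsest.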

  10. Velocity visualization in gaseous flows

    NASA Technical Reports Server (NTRS)

    Hanson, R. K.

    1985-01-01

    Techniques are established for visualizing velocity in gaseous flows. Two approaches are considered, both of which are capable of yielding velocity simultaneously at a large number of flowfield locations, thereby providing images of velocity. The first technique employs a laser to mark specific fluid elements and a camera to track their subsequent motion. Marking is done by laser-induced phosphorescence of biacetyl, added as a tracer species in a flow of N2, or by laser-induced formation of sulfur particulates in SF6-H2-N2 mixtures. The second technique is based on the Doppler effect, and uses an intensified photodiode array camera and a planar form of laser-induced fluorescence to detect 2-d velocities of I2 (in I2-N2 mixtures) via Doppler-shifted absorption of narrow-linewidth laser radiation at 514.5 nm.
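The Doppler-based approach above infers velocity from the frequency shift of a narrow-linewidth absorption line; non-relativistically, the shift is delta_nu = v / lambda, so v = lambda * delta_nu. A worked example at the 514.5 nm wavelength from the text, with an invented 1 GHz shift:

```python
# Line-of-sight velocity from a Doppler-shifted absorption line.
# 514.5 nm is the laser wavelength from the text; the 1 GHz shift
# is an invented example value.

wavelength = 514.5e-9          # laser wavelength (m)
delta_nu = 1.0e9               # observed Doppler shift (Hz), illustrative

velocity = wavelength * delta_nu   # line-of-sight velocity (m/s)
print(velocity)                    # -> 514.5 m/s
```

This shows why narrow-linewidth sources are needed: resolving velocities of order 100 m/s requires resolving frequency shifts of a few hundred MHz against a ~583 THz carrier.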

  11. A search for Earth-crossing asteroids, supplement

    NASA Technical Reports Server (NTRS)

    Taff, L. G.; Sorvari, J. M.; Kostishack, D. F.

    1984-01-01

    The ground-based electro-optical deep space surveillance program involves a network of computer-controlled 40-inch (1 m) telescopes equipped with large-format, low-light-level television cameras of the intensified silicon diode array type, which is to replace the Baker-Nunn photographic camera system for artificial satellite tracking. A prototype observatory was constructed where distant artificial satellites are discriminated from stars in real time on the basis of the satellites' proper motion. The hardware was modified and the technique used to observe and search for minor planets. Asteroids are now routinely observed and searched for. The complete observing cycle, including the 2"-3" measurement of position, currently requires about four minutes. The commonality of asteroid and artificial-satellite observing, searching, data reduction, and orbital analysis is stressed. Improvements to the hardware and software as well as to operational techniques are considered.

  12. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors, as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras, represents a major optical design advancement that greatly enhances the scientific potential of such instruments. By altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multispectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors simplifies the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with the large corrective optics used to conform to flat detectors. New astronomical experiments may be devised around curved CCD applications, in conjunction with large-format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. 
Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associated wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.

  13. Evidence for persistent flow and aqueous sedimentation on early Mars.

    PubMed

    Malin, Michael C; Edgett, Kenneth S

    2003-12-12

    Landforms representative of sedimentary processes and environments that occurred early in martian history have been recognized in Mars Global Surveyor Mars Orbiter Camera and Mars Odyssey Thermal Emission Imaging System images. Evidence of distributary, channelized flow (in particular, flow that lasted long enough to foster meandering) and the resulting deposition of a fan-shaped apron of debris indicate persistent flow conditions and formation of at least some large intracrater layered sedimentary sequences within fluvial, and potentially lacustrine, environments.

  14. 2nd EVA - MS Foale and Nicollier during FGS changeout

    NASA Image and Video Library

    1999-12-24

    STS103-501-026 (19 - 27 December 1999) --- Astronauts C. Michael Foale, left, and Claude Nicollier (on Discovery's robotic arm) install a Fine Guidance Sensor (FGS) into a protective enclosure in the Shuttle’s payload bay. Foale and Nicollier performed the second of three space walks to service the Hubble Space Telescope (HST) on the STS-103 mission. A large format camera inside Discovery's cabin was used to record this high-resolution image, while the Shuttle was orbiting above ocean and clouds.

  15. Phootprint - A Phobos sample return mission study

    NASA Astrophysics Data System (ADS)

    Koschny, Detlef; Svedhem, Håkan; Rebuffat, Denis

    Introduction ESA is currently studying a mission, called Phootprint, to return a sample from Phobos. This study is performed as part of ESA's Mars Robotic Exploration Programme. Part of the mission goal is to prepare technology needed for a sample return mission from Mars itself; the mission should also have a strong scientific justification, which is described here. 1. Science goal The main science goal of this mission is to understand the formation of the Martian moon Phobos and to put constraints on the evolution of the solar system. Currently, there are several possibilities for explaining the formation of the Martian moons: (a) co-formation with Mars; (b) capture of objects coming close to Mars; (c) impact of a large body onto Mars and formation from the impact ejecta. The main science goal of this mission is to find out which of the three scenarios is the most probable one. To do this, samples from Phobos would be returned to Earth and analyzed with extremely high precision in ground-based laboratories. An on-board payload is foreseen to provide the information needed to put the sample into its geological context. 2. Mission Spacecraft and payload will be based on experience gained from previous studies of missions to the Martian moons and asteroids. In particular, the Marco Polo and MarcoPolo-R asteroid sample return mission studies performed at ESA were used as a starting point. Currently, industrial studies are ongoing. The initial starting assumption was to use a Soyuz launcher. Unlike the initial Marco Polo and MarcoPolo-R studies to an asteroid, a transfer stage will be needed. Another main difference from an asteroid mission is the fact that the spacecraft actually orbits Mars, not Phobos or Deimos. It is possible to select a spacecraft orbit which, in a Phobos- or Deimos-centred reference system, would describe an ellipse around the moon. 
The following model payload is currently foreseen: a Wide Angle Camera, a Narrow Angle Camera, a Close-Up Camera, a context camera for sampling context, a visible-IR spectrometer, a thermal-IR spectrometer, and a Radio Science investigation. It is expected that these instruments will provide the necessary context for the sample. The paper will focus on the current status of the mission study.

  16. Numerical analysis of wavefront measurement characteristics by using plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun

    2016-01-01

    To take advantage of a large-diameter telescope for high-resolution imaging of extended targets, it is necessary to detect and compensate the wave-front aberrations induced by atmospheric turbulence. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the atmospheric turbulence in an astronomical observation. To recover the wave-front phase tomographically, a method for simultaneous wide field-of-view (FOV), multi-perspective wave-front detection is urgently needed, and the plenoptic camera possesses this unique advantage. Our paper focuses on the capability of the plenoptic camera to extract the wave-front from different perspectives simultaneously. We built a theoretical model and a simulation system to study the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor, and we evaluated its performance for different types of wave-front aberration corresponding to different application scenarios. Finally, we performed multi-perspective wave-front sensing with the plenoptic camera in simulation. Our study of these measurement characteristics is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, wide-FOV wave-front sensor, which is expected to solve the problem of wide-FOV wave-front detection and can be used for adaptive optics in giant telescopes.
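The slope measurement underlying plenoptic (and Shack-Hartmann-style) wave-front sensing can be illustrated with a single lenslet: a local wave-front tilt displaces the spot image, and the centroid shift divided by the lenslet focal length gives the slope. Focal length, pixel pitch, and spot positions below are invented illustration values, not parameters from the paper:

```python
import numpy as np

# Illustrative single-lenslet slope measurement: a wave-front tilt shifts
# the spot centroid; slope = centroid_shift * pixel_pitch / focal_length.
# All numeric values are invented for illustration.

def centroid(img):
    """Intensity-weighted centroid (row, col) of a spot image."""
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

focal = 5.0e-3                                  # lenslet focal length (m), assumed
pitch = 10.0e-6                                 # detector pixel pitch (m), assumed

ref = np.zeros((16, 16)); ref[8, 8] = 1.0       # reference spot (no aberration)
spot = np.zeros((16, 16)); spot[8, 11] = 1.0    # spot displaced 3 pixels in x

(_, x0), (_, x1) = centroid(ref), centroid(spot)
slope_x = (x1 - x0) * pitch / focal             # local wave-front slope (radians)
print(slope_x)                                  # -> 0.006
```

A plenoptic sensor generalizes this by measuring such slopes for many viewing directions at once, which is what enables the tomographic, multi-perspective recovery discussed in the abstract.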

  17. Single Particle Damage Events in Candidate Star Camera Sensors

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Marshall, Cheryl; Polidan, Elizabeth; Wacyznski, Augustyn; Johnson, Scott

    2005-01-01

    Si charge coupled devices (CCDs) are currently the preeminent detector in star cameras as well as in the near ultraviolet (uv) to visible wavelength region for astronomical observations in space and in earth-observing space missions. Unfortunately, the performance of CCDs is permanently degraded by total ionizing dose (TID) and displacement damage effects. TID produces threshold voltage shifts on the CCD gates and displacement damage reduces the charge transfer efficiency (CTE), increases the dark current, produces dark current nonuniformities and creates random telegraph noise in individual pixels. In addition to these long term effects, cosmic ray and trapped proton transients also interfere with device operation on orbit. In the present paper, we investigate the dark current behavior of CCDs - in particular the formation and annealing of hot pixels. Such pixels degrade the ability of a CCD to perform science and also can present problems to the performance of star camera functions (especially if their numbers are not correctly anticipated). To date, most dark current radiation studies have been performed by irradiating the CCDs at room temperature but this can result in a significantly optimistic picture of the hot pixel count. We know from the Hubble Space Telescope (HST) that high dark current pixels (so-called hot pixels or hot spikes) accumulate as a function of time on orbit. For example, the HST Advanced Camera for Surveys/Wide Field Camera instrument performs monthly anneals despite the loss of observational time, in order to partially anneal the hot pixels. Note that the fact that significant reduction in hot pixel populations occurs for room temperature anneals is not presently understood since none of the commonly expected defects in Si (e.g. divacancy, E center, and A-center) anneal at such a low temperature. 
A HST Wide Field Camera 3 (WFC3) CCD manufactured by E2V was irradiated while operating at -83C, and the dark current was studied as a function of temperature while the CCD was warmed through a sequence of temperatures up to a maximum of +30C. The device was then cooled back down to -83C and re-measured. Hot pixel populations were tracked during the warm-up and cool-down. Hot pixel annealing began below 40C, and the anneal process was largely complete before the detector reached +30C. There was no apparent sharp temperature dependence in the annealing. Although a large fraction of the hot pixels fell below the threshold for counting as a hot pixel, they nevertheless remained warmer than the remaining population. The details of the mechanism for the formation and annealing of hot pixels are not presently understood, but it appears likely that hot pixels are associated with displacement damage occurring in high electric-field regions.

  18. Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas

    ERIC Educational Resources Information Center

    Hayden, Lance Alan

    2009-01-01

    This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…

  19. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    USGS Publications Warehouse

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  20. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    PubMed

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  1. New Modular Camera No Ordinary Joe

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leary, T.J.; Lamb, A.

The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion-compensated large format camera; various high-resolution, light-intensified and thermal video cameras; and several experimental sensors (e.g. the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image formats and integration parameters. 2 refs., 3 figs., 2 tabs.

  3. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow as well, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured with millimetre accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the record of the stabilizer's movement, to correct the floating lever arm distances.
If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied. However, there is a misalignment (boresight angle) that must be evaluated by a photogrammetric process using advanced tools, e.g. in Bingo. Once all these parameters have been determined, the system is capable of supporting projects without, or with only a few, ground control points. But what effect does directly applying the achieved direct-orientation values have on the photogrammetric process, compared with an aerial triangulation (AT) based on proper tiepoint matching? The paper aims to show the steps to be done by potential users and gives a quality estimate of the importance and influence of the various calibration and adjustment steps.
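The lever-arm correction described above can be sketched in a few lines. The Euler-angle convention and the numeric lever-arm values below are illustrative assumptions, not values from the paper:

```python
import math

def rotation_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation matrix from Euler angles (radians),
    using the common aerospace z-y-x (yaw-pitch-roll) convention."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def rotate(R, v):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# Lever arm from the IMU centre to the camera projection centre, measured
# in the body frame (metres); values purely illustrative.
lever_body = [0.25, -0.10, 0.35]

# Attitude reported by the stabilized platform / IMU at exposure time.
R = rotation_matrix(math.radians(2.0), math.radians(-1.5), math.radians(90.0))
lever_nav = rotate(R, lever_body)
```

The same rotation applied with the stabilizer's instantaneous attitude is what corrects the floating IMU-to-GPS-antenna lever arm.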

  4. Overview of diagnostic implementation on Proto-MPEX at ORNL

    NASA Astrophysics Data System (ADS)

    Biewer, T. M.; Bigelow, T.; Caughman, J. B. O.; Fehling, D.; Goulding, R. H.; Gray, T. K.; Isler, R. C.; Martin, E. H.; Meitner, S.; Rapp, J.; Unterberg, E. A.; Dhaliwal, R. S.; Donovan, D.; Kafle, N.; Ray, H.; Shaw, G. C.; Showers, M.; Mosby, R.; Skeen, C.

    2015-11-01

The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) recently began operating with an expanded diagnostic set. Approximately 100 sightlines have been established, delivering the plasma light emission to a ``patch panel'' in the diagnostic room for distribution to a variety of instruments: narrow-band filter spectroscopy, Doppler spectroscopy, laser induced breakdown spectroscopy, optical emission spectroscopy, and Thomson scattering. Additional diagnostic systems include: IR camera imaging, in-vessel thermocouples, ex-vessel fluoroptic probes, fast pressure gauges, visible camera imaging, microwave interferometry, a retarding-field energy analyzer, rf-compensated and ``double'' Langmuir probes, and B-dot probes. A data collection and archival system has been initiated using the MDSplus format. This effort capitalizes on a combination of new and legacy diagnostic hardware at ORNL and was accomplished largely through student labor. This work was supported by the U.S. DOE contract DE-AC05-00OR22725.

  5. Rollout - Shuttle Discovery - STS 41D Launch - KSC

    NASA Image and Video Library

    1986-11-26

    S86-41700 (19 May 1984) --- The Space Shuttle Discovery moves towards Pad A on the crawler transporter for its maiden flight. Discovery will be launched on its first mission no earlier than June 19, 1984. Flight 41-D will carry a crew of six; Commander Henry Hartsfield, Pilot Mike Coats, Mission Specialists Dr. Judith Resnik, Dr. Steven Hawley and Richard Mullane and Payload Specialist Charles Walker. Walker is the first payload specialist to fly aboard a space shuttle. He will be running the materials processing device developed by McDonnell Douglas as part of its Electrophoresis Operations in Space project. Mission 41-D is scheduled to be a seven-day flight and to land at Edwards Air Force Base in California. The Syncom IV-1 (LEASAT) will be deployed from Discovery's cargo bay and the OAST-1, Large Format Camera, IMAX and Cinema 360 cameras will be aboard.

  6. Body worn camera

    NASA Astrophysics Data System (ADS)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main constraints on this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect while designing such a system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video; audio for the video; combining audio and video and saving them in .mp4 format; the battery size required for 8 hours of continuous recording; and security. For prototyping, this system is implemented using a Raspberry Pi model B.
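The storage side of the 8-hour recording requirement reduces to simple arithmetic. The 8 Mbit/s bitrate below is an assumed typical value for 1080p30 H.264 footage, not a figure from the paper:

```python
def storage_gb(bitrate_mbps, hours):
    """Approximate file size in gigabytes for continuous recording at a
    constant bitrate: bits/s -> bytes/s -> total bytes -> GB."""
    return bitrate_mbps * 1e6 / 8 * hours * 3600 / 1e9

# Assumed bitrate of 8 Mbit/s for 1080p30 footage, recorded for 8 hours.
size = storage_gb(8, 8)   # roughly 29 GB of storage needed
```

A similar back-of-the-envelope calculation with the camera's measured power draw gives the required battery capacity for a full shift.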

  7. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    PubMed Central

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533

  8. The AzTEC millimeter-wave camera: Design, integration, performance, and the characterization of the (sub-)millimeter galaxy population

    NASA Astrophysics Data System (ADS)

    Austermann, Jason Edward

    One of the primary drivers in the development of large format millimeter detector arrays is the study of sub-millimeter galaxies (SMGs) - a population of very luminous high-redshift dust-obscured starbursts that are widely believed to be the dominant contributor to the Far-Infrared Background (FIB). The characterization of such a population requires the ability to map large patches of the (sub-)millimeter sky to high sensitivity within a feasible amount of time. I present this dissertation on the design, integration, and characterization of the 144-pixel AzTEC millimeter-wave camera and its application to the study of the sub-millimeter galaxy population. In particular, I present an unprecedented characterization of the "blank-field" (fields with no known mass bias) SMG number counts by mapping over 0.5 deg^2 to 1.1mm depths of ~1mJy - a previously unattained depth on these scales. This survey provides the tightest SMG number counts available, particularly for the brightest and rarest SMGs that require large survey areas for a significant number of detections. These counts are compared to the predictions of various models of the evolving mm/sub-mm source population, providing important constraints for the ongoing refinement of semi-analytic and hydrodynamical models of galaxy formation. I also present the results of an AzTEC 0.15 deg^2 survey of the COSMOS field, which uncovers a significant over-density of bright SMGs that are spatially correlated to foreground mass structures, presumably as a result of gravitational lensing. Finally, I compare the results of the available SMG surveys completed to date and explore the effects of cosmic variance on the interpretation of individual surveys.

  9. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that it will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  10. 1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Marcinka, John W.

    1998-09-01

For over a decade, the broadcast industry, the film industry and the computer industry have had a long-range objective to originate high definition images with progressive scan. This produces images with better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 x 1080 resolution that had obtained international acceptance for high definition program production. The camera described in this paper produces an output in that format derived from two 1920 x 1080 CCD sensors produced by Eastman Kodak.

  11. Gallium arsenide quantum well-based far infrared array radiometric imager

    NASA Technical Reports Server (NTRS)

    Forrest, Kathrine A.; Jhabvala, Murzy D.

    1991-01-01

We have built an array-based camera (FIRARI) for thermal imaging (lambda = 8 to 12 microns). FIRARI uses a square format 128 by 128 element array of aluminum gallium arsenide quantum well detectors that are indium bump bonded to a high capacity silicon multiplexer. The quantum well detectors offer good responsivity along with high response and noise uniformity, resulting in excellent thermal images without compensation for variation in pixel response. A noise equivalent temperature difference of 0.02 K at a scene temperature of 290 K was achieved with the array operating at 60 K. FIRARI demonstrated that AlGaAs quantum well detector technology can provide large format arrays with performance superior to mercury cadmium telluride at far less cost.

  12. Cluster Lensing with the BTC

    NASA Astrophysics Data System (ADS)

    Fischer, P.

    1997-12-01

Weak distortions of background galaxies are rapidly emerging as a powerful tool for the measurement of galaxy cluster mass distributions. Lensing based studies have the advantage of being direct measurements of mass and are not model-dependent as are other techniques (X-ray, radial velocities). To date, studies have been limited by CCD field size, meaning that full coverage of the clusters out to the virial radii and beyond has not been possible. Probing this large radius region is essential for testing models of large scale structure formation. New wide field CCD mosaics, for the first time, allow mass measurements out to very large radius. We have obtained images for a sample of clusters with the ``Big Throughput Camera'' (BTC) on the CTIO 4m. This camera comprises four thinned SITe 2048 x 2048 CCDs, each 15 arcmin on a side, for a total area of one quarter of a square degree. We have developed an automated reduction pipeline which: 1) corrects for spatial distortions, 2) corrects for PSF anisotropy, 3) determines relative scaling and background levels, and 4) combines multiple exposures. In this poster we will present some preliminary results of our cluster lensing study. This will include radial mass and light profiles and 2-d mass and galaxy density maps.

  13. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
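The intra-camera geometry estimation rests on the pinhole relation between a pedestrian's known physical height and the height of its detection box in pixels. A minimal sketch of that relation, with an assumed focal length and average pedestrian height (this is the underlying geometric idea, not the paper's actual estimator):

```python
def distance_from_height(f_pixels, person_height_m, bbox_height_px):
    """Pinhole-model range estimate: an object of known physical height
    subtends fewer pixels the farther it is from the camera."""
    return f_pixels * person_height_m / bbox_height_px

# Assumed values: focal length of 1000 px, average pedestrian height 1.75 m.
d = distance_from_height(1000.0, 1.75, 100.0)   # a 100 px person is ~17.5 m away
```

Aggregating many such detections over a camera's field of view is what allows tilt angle, focal length and camera height to be jointly estimated, and hence pixel-to-metre conversion.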

  14. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
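Motmot's pluggable framework for realtime image analysis can be illustrated with a minimal plugin pipeline. The class and method names below are hypothetical sketches and do not reproduce Motmot's actual API:

```python
class FramePipeline:
    """Minimal sketch of a pluggable real-time analysis chain: each plugin
    is a callable taking (frame, timestamp) and returning a frame."""

    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        """Add a plugin; usable as a decorator."""
        self.plugins.append(plugin)
        return plugin

    def process(self, frame, timestamp):
        """Run every registered plugin on the frame, in order."""
        for plugin in self.plugins:
            frame = plugin(frame, timestamp)
        return frame

pipeline = FramePipeline()

@pipeline.register
def invert(frame, timestamp):
    # Toy "analysis" step: invert 8-bit pixel values.
    return [255 - p for p in frame]

result = pipeline.process([0, 128, 255], timestamp=0.0)
```

In a real system each plugin would run per-frame inside the acquisition loop, which is why Motmot pushes the heavy lifting into optimized low-level C functions.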

  15. The Topographic Data Deluge - Collecting and Maintaining Data in a 21ST Century Mapping Agency

    NASA Astrophysics Data System (ADS)

    Holland, D. A.; Pook, C.; Capstick, D.; Hemmings, A.

    2016-06-01

    In the last few years, the number of sensors and data collection systems available to a mapping agency has grown considerably. In the field, in addition to total stations measuring position, angles and distances, the surveyor can choose from hand-held GPS devices, multi-lens imaging systems or laser scanners, which may be integrated with a laptop or tablet to capture topographic data directly in the field. These systems are joined by mobile mapping solutions, mounted on large or small vehicles, or sometimes even on a backpack carried by a surveyor walking around a site. Such systems allow the raw data to be collected rapidly in the field, while the interpretation of the data can be performed back in the office at a later date. In the air, large format digital cameras and airborne lidar sensors are being augmented with oblique camera systems, taking multiple views at each camera position and being used to create more realistic 3D city models. Lower down in the atmosphere, Unmanned Aerial Vehicles (or Remotely Piloted Aircraft Systems) have suddenly become ubiquitous. Hundreds of small companies have sprung up, providing images from UAVs using ever more capable consumer cameras. It is now easy to buy a 42 megapixel camera off the shelf at the local camera shop, and Canon recently announced that they are developing a 250 megapixel sensor for the consumer market. While these sensors may not yet rival the metric cameras used by today's photogrammetrists, the rapid developments in sensor technology could eventually lead to the commoditization of high-resolution camera systems. With data streaming in from so many sources, the main issue for a mapping agency is how to interpret, store and update the data in such a way as to enable the creation and maintenance of the end product. 
This might be a topographic map, ortho-image or a digital surface model today, but soon it is just as likely to be a 3D point cloud, textured 3D mesh, 3D city model, or Building Information Model (BIM), with all the data interpretation and modelling that entails. In this paper, we describe research and investigations into the developing technologies and outline the findings for a National Mapping Agency (NMA). We also look at the challenges that these new data collection systems will bring to an NMA, and suggest ways that we may work to meet these challenges and deliver the products desired by our users.

  16. The threshold of vapor channel formation in water induced by pulsed CO2 laser

    NASA Astrophysics Data System (ADS)

    Guo, Wenqing; Zhang, Xianzeng; Zhan, Zhenlin; Xie, Shusen

    2012-12-01

Water plays an important role in laser ablation. There are two main interpretations of laser-water interaction: the hydrokinetic effect and the vapor phenomenon. Both explanations are reasonable in some respects, but neither fully explains the mechanism of laser-water interaction. In this study, the dynamic process of vapor channel formation induced by a pulsed CO2 laser in a static water layer was monitored by a high-speed camera. The wavelength of the pulsed CO2 laser is 10.64 um, and the pulse repetition rate is 60 Hz. The laser power ranged from 1 to 7 W with a step of 0.5 W. The frame rate of the high-speed camera used in the experiment was 80025 fps. Based on the high-speed camera pictures, the dynamic process of vapor channel formation was examined, and the threshold of vapor channel formation, the pulsation period, the volume, and the maximum depth and corresponding width of the vapor channel were determined. The results showed that the threshold of vapor channel formation was about 2.5 W. Moreover, the pulsation period and the maximum depth and corresponding width of the vapor channel increased with increasing laser power.

  17. A multiscale video system for studying optical phenomena during active experiments in the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Nikolashkin, S. V.; Reshetnikov, A. A.

    2017-11-01

The system for video surveillance during active rocket experiments at the Polar Geophysical Observatory "Tixie", and for studying the effects of "Soyuz" vehicle launches from the "Vostochny" cosmodrome over the territory of the Republic of Sakha (Yakutia), is presented. The system consists of three AHD video cameras with different angles of view mounted on a common platform on a tripod, with the possibility of manual guiding. The main camera, with a high-sensitivity black-and-white CCD matrix (SONY EXview HAD II), is equipped, depending on the task, with an "MTO-1000" (F = 1000 mm) or "Jupiter-21M" (F = 300 mm) lens and is designed for more detailed shooting of luminous formations. The second camera is of the same type but has a 30-degree angle of view; it is intended for shooting the general scene and large objects, and also for tying object coordinates to the stars. The third, color wide-angle camera (120 degrees) is designed for tying to landmarks in the daytime; the optical axis of this channel is directed 60 degrees downward. The data are recorded on the hard disk of a four-channel digital video recorder. Tests of the original two-channel version of the system were conducted during the launch of a geophysical rocket at Tixie in September 2015 and showed its effectiveness.

  18. Evaluation of large format electron bombarded virtual phase CCDs as ultraviolet imaging detectors

    NASA Technical Reports Server (NTRS)

    Opal, Chet B.; Carruthers, George R.

    1989-01-01

In conjunction with an external UV-sensitive cathode, an electron-bombarded CCD may be used as a high quantum efficiency/wide dynamic range photon-counting UV detector. Results are presented for the case of a 1024 x 1024, 18-micron square pixel virtual phase CCD used with an electromagnetically focused f/2 Schmidt camera, which yields excellent single-photoevent discrimination and counting efficiency. Attention is given to the vacuum-chamber arrangement used to conduct system tests and the CCD electronics and data-acquisition systems employed.

  19. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  20. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  1. Structure Formation in Complex Plasma

    DTIC Science & Technology

    2011-08-24

[Figure captions only: acrylic particles suspended in an RF plasma inside a glass Dewar bottle (upper figures) or in the vapor of liquid helium (lower figures); the setup includes a ring electrode, green laser illumination, a prism mirror, a CCD camera, a glass tube, liquid N2 and liquid He cooling, helium gas, and a pressure gauge.]

  2. FRIPON, the French fireball network

    NASA Astrophysics Data System (ADS)

    Colas, F.; Zanda, B.; Bouley, S.; Vaubaillon, J.; Marmo, C.; Audureau, Y.; Kwon, M. K.; Rault, J. L.; Caminade, S.; Vernazza, P.; Gattacceca, J.; Birlan, M.; Maquet, L.; Egal, A.; Rotaru, M.; Gruson-Daniel, Y.; Birnbaum, C.; Cochard, F.; Thizy, O.

    2015-10-01

FRIPON (Fireball Recovery and InterPlanetary Observation Network) [4](Colas et al, 2014) was recently funded by the ANR (Agence Nationale de la Recherche). Its aim is to connect meteoritical science with asteroidal and cometary science in order to better understand solar system formation and evolution. The main idea is to set up an observation network covering the entire French territory to collect a large number of meteorites (one or two per year) with accurate orbits, allowing us to pinpoint possible parent bodies. 100 all-sky cameras will be installed by the end of 2015, forming a dense network with an average distance of 100 km between stations. To maximize the accuracy of orbit determination, we will combine our optical data with radar data from the GRAVES beacon received by 25 stations [5](Rault et al, 2015). As both the setting up of the network and the creation of search teams for meteorites will need manpower beyond our small team of professionals, we are developing a citizen-science network called Vigie-Ciel [6](Zanda et al, 2015). The public at large will thus be able to use our data, participate in search campaigns, or even set up their own cameras.

  3. Detail of large industrial doors on north elevation; camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail of large industrial doors on north elevation; camera facing south. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA

  4. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
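The spherical panoramic viewspace underlying this camera control model rests on converting pan/tilt angles into a unit view-direction vector. A minimal sketch of that conversion (a simplified geometric model, not the paper's full PTZ control mapping):

```python
import math

def ptz_direction(pan_deg, tilt_deg):
    """Map pan/tilt angles to a unit view-direction vector on the sphere:
    pan rotates about the vertical axis, tilt measures elevation from the
    horizontal plane."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))

# Direction the camera looks at pan=30 deg, tilt=-10 deg (slightly downward).
view = ptz_direction(30.0, -10.0)
```

Sampling this mapping over the camera's full pan/tilt range yields the spherical panorama that is then registered to the aerial orthophotograph.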

  5. Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography

    USGS Publications Warehouse

    Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.

    1972-01-01

    Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.
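The ground resolutions quoted above follow from the simple vertical-photography scale relation (scale = altitude / focal length). The 14-micron film element in the sketch below is an assumed value, chosen only to be consistent with the stated ~20 m figure:

```python
def ground_size_m(altitude_m, focal_m, film_element_m):
    """Ground footprint of one resolvable film element under a simple
    vertical-photography scale model: scale = altitude / focal length."""
    return altitude_m / focal_m * film_element_m

# Apollo 15 metric camera: 7.6 cm focal length flown at 110 km altitude.
scale = 110_000 / 0.076                    # photo scale ~1:1.45 million
g = ground_size_m(110_000, 0.076, 14e-6)   # ~20 m ground footprint
```

The same relation explains the panoramic camera's 1 to 2 m ground resolution: its much longer 61-cm optical bar gives a scale roughly eight times larger.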

  6. Effects of red light camera enforcement on fatal crashes in large U.S. cities.

    PubMed

    Hu, Wen; McCartt, Anne T; Teoh, Eric R

    2011-08-01

    To estimate the effects of red light camera enforcement on per capita fatal crash rates at intersections with signal lights. From the 99 large U.S. cities with more than 200,000 residents in 2008, 14 cities were identified with red light camera enforcement programs for all of 2004-2008 but not at any time during 1992-1996, and 48 cities were identified without camera programs during either period. Analyses compared the citywide per capita rate of fatal red light running crashes and the citywide per capita rate of all fatal crashes at signalized intersections during the two study periods, and rate changes then were compared for cities with and without camera programs. Poisson regression was used to model crash rates as a function of red light camera enforcement, land area, and population density. The average annual rate of fatal red light running crashes declined for both study groups, but the decline was larger for cities with red light camera enforcement programs than for cities without camera programs (35% vs. 14%). The average annual rate of all fatal crashes at signalized intersections decreased by 14% for cities with camera programs and increased slightly (2%) for cities without cameras. After controlling for population density and land area, the rate of fatal red light running crashes during 2004-2008 for cities with camera programs was an estimated 24% lower than what would have been expected without cameras. The rate of all fatal crashes at signalized intersections during 2004-2008 for cities with camera programs was an estimated 17% lower than what would have been expected without cameras. Red light camera enforcement programs were associated with a statistically significant reduction in the citywide rate of fatal red light running crashes and a smaller but still significant reduction in the rate of all fatal crashes at signalized intersections. The study adds to the large body of evidence that red light camera enforcement can prevent the most serious crashes. Communities seeking to reduce crashes at intersections should consider this evidence. Copyright © 2011 Elsevier Ltd. All rights reserved.
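
    The per-capita rate comparison at the core of the study can be sketched in a few lines. The crash counts and population below are invented for illustration; the study's actual city-level data and Poisson regression are not reproduced here:

```python
# Hedged sketch: average annual per-capita fatal-crash rates and their
# percent change between two study periods. All numbers are invented.

def annual_rate_per_capita(crashes, population, years):
    """Average annual crashes per 100,000 residents."""
    return crashes / years / population * 100_000

def percent_change(before, after):
    """Percent change from the earlier period's rate to the later one's."""
    return (after - before) / before * 100.0

# Hypothetical city: 120 fatal red-light-running crashes over 1992-1996,
# 80 over 2004-2008, population 500,000 (held fixed for simplicity).
before = annual_rate_per_capita(120, 500_000, 5)   # 4.8 per 100k per year
after  = annual_rate_per_capita(80, 500_000, 5)    # 3.2 per 100k per year
print(round(percent_change(before, after), 1))     # -33.3
```

    The study's comparison then contrasts such changes between camera and non-camera city groups, with Poisson regression adjusting for land area and population density.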

  7. High-Resolution Large Field-of-View FUV Compact Camera

    NASA Technical Reports Server (NTRS)

    Spann, James F.

    2006-01-01

    The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk from geosynchronous altitude with a spatial resolution of approximately 20 km. The optics and filters are emphasized.

  8. Mechanical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordby, Martin; Bowden, Gordon; Foss, Mike

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  9. Study of Cryogenic Complex Plasma

    DTIC Science & Technology

    2007-04-26

    enabled us to detect the formation of the Coulomb crystals as shown in Fig. 2. [Remainder of excerpt is apparatus-schematic label residue: liquid He, ring electrode, particles, green laser, RF plasma, CCD cameras, prism mirror, glass tube, liquid N2, glass dewar, acrylic particles, helium gas, pressure.]

  10. BigView Image Viewing on Tiled Displays

    NASA Technical Reports Server (NTRS)

    Sandstrom, Timothy

    2007-01-01

    BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore on relatively modest machines images such as the Mars Orbiter Camera mosaic (92,160 × 33,280 pixels). The images must first be converted into paged format, where the image is stored in 256 × 256 pixel pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 × 256 page.
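
    The paged pyramid described in the abstract is easy to reason about numerically. The following sketch is a reconstruction from the description, not BigView's actual code; it counts the levels needed before the Mars Orbiter Camera mosaic shrinks to a single 256 × 256 page:

```python
import math

PAGE = 256  # page edge length in pixels, per the BigView paged format

def pyramid_levels(width, height, page=PAGE):
    """Yield (level, w, h, pages_x, pages_y), halving the image
    until the whole level fits inside a single page."""
    level, w, h = 0, width, height
    while True:
        px = math.ceil(w / page)
        py = math.ceil(h / page)
        yield level, w, h, px, py
        if px == 1 and py == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
        level += 1

# Mars Orbiter Camera mosaic dimensions from the abstract.
levels = list(pyramid_levels(92_160, 33_280))
print(len(levels))  # 10 levels from full resolution down to one page
print(levels[-1])   # (9, 180, 65, 1, 1): coarsest level fits one page
```

    At full resolution the mosaic spans a 360 × 130 grid of pages, so only a handful of pages near the viewport ever need to be resident in texture memory.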

  11. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), the location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in the FORTRAN IV language for the Control Data 6000 series digital computer.

  12. Analysis of calibration accuracy of cameras with different target sizes for large field of view

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan

    2018-03-01

    Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding, and machinery manufacturing, and camera calibration over a large field of view is a critical part of visual measurement. A large-scale target is difficult to manufacture and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view must be determined to ensure that wide-field calibration meets the precision requirement. In this paper, cameras are calibrated with a series of checkerboard and circular calibration targets of different dimensions. The ratios of target size to camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81%, and 90%. The target is placed at different positions in the camera field to obtain camera parameters for each position. The distribution curves of the mean reprojection error of the reconstructed feature points are then analyzed for each ratio. The experimental data demonstrate that calibration precision improves as the ratio of target size to field of view increases, and that the mean reprojection error changes only slightly once the ratio exceeds 45%.
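
    The precision metric used in this comparison, the mean reprojection error, can be illustrated with a minimal sketch. The point coordinates below are invented, and the calibration model that produces the reprojected points is omitted:

```python
# Hedged sketch of the mean reprojection error: the average pixel distance
# between detected feature points and points reprojected through the
# calibrated camera model. All coordinates here are invented.
import math

def mean_reprojection_error(observed, reprojected):
    """Mean Euclidean distance (pixels) between matched point pairs."""
    dists = [math.dist(o, r) for o, r in zip(observed, reprojected)]
    return sum(dists) / len(dists)

observed    = [(100.0, 200.0), (150.0, 210.0), (204.0, 223.0)]
reprojected = [(100.3, 200.4), (149.8, 210.0), (204.0, 222.5)]
print(round(mean_reprojection_error(observed, reprojected), 3))  # 0.4
```

    In the paper's experiment, this error is computed per target-size/FOV ratio; a smaller, more stable value at ratios above 45% indicates adequate calibration precision.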

  13. InGaAs focal plane arrays for low-light-level SWIR imaging

    NASA Astrophysics Data System (ADS)

    MacDougal, Michael; Hood, Andrew; Geske, Jon; Wang, Jim; Patel, Falgun; Follman, David; Manzo, Juan; Getty, Jonathan

    2011-06-01

    Aerius Photonics will present their latest developments in large InGaAs focal plane arrays, which are used for low-light-level imaging in the short wavelength infrared (SWIR) regime. Aerius will present imaging in both 1280×1024 and 640×512 formats, along with characterization of the FPA including dark current measurements. Aerius will also show the results of development of SWIR FPAs for high temperatures, including imagery and dark current data. Finally, Aerius will show results of using the SWIR camera with Aerius' SWIR illuminators based on VCSEL technology.

  14. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and it provides a reference solution for the full introduction of RTS2 into the LAMOST observatory control system.

  15. STS-37 Pilot Cameron and MS Godwin work on OV-104's aft flight deck

    NASA Image and Video Library

    1991-04-11

    STS037-33-031 (5-11 April 1991) --- Astronauts Kenneth D. Cameron, STS-37 pilot, and Linda M. Godwin, mission specialist, take advantage of a well-lighted crew cabin to pose for an in-space portrait on the Space Shuttle Atlantis' aft flight deck. The two shared duties controlling the Remote Manipulator System (RMS) during operations involving the release of the Gamma Ray Observatory (GRO) and the Extravehicular Activity (EVA) of astronauts Jerry L. Ross and Jerome (Jay) Apt. The overhead window seen here and nearby eye-level windows (out of frame at left) are in a busy location on Shuttle missions, as they are used for payload surveys, Earth observation operations, astronomical studies and other purposes. Note the temporarily stowed large format still photo camera at lower right corner. This photo was made with a 35mm camera. This was one of the visuals used by the crew members during their April 19 Post Flight Press Conference (PFPC) at the Johnson Space Center (JSC).

  16. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

    It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel counts. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions: a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly, by measurement of subjective quality, or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
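
    The photospace-weighting idea amounts to a frequency-weighted average of per-bin quality. A minimal sketch, with invented (illuminance, distance) bins and quality scores rather than the authors' actual data or tooling:

```python
# Hedged sketch of photospace weighting: overall user-experienced quality is
# each bin's quality weighted by how often pictures are captured there.
# Bin names, frequencies, and scores are invented for illustration.

def photospace_weighted_quality(frequency, quality):
    """frequency: {bin: capture probability}, quality: {bin: quality score}."""
    total = sum(frequency.values())
    return sum(frequency[b] * quality[b] for b in frequency) / total

freq = {"low-light/near": 0.40, "indoor/mid": 0.35, "daylight/far": 0.25}
qual = {"low-light/near": 2.0, "indoor/mid": 3.0, "daylight/far": 4.5}
print(round(photospace_weighted_quality(freq, qual), 3))  # 2.975
```

    Because low-light bins dominate typical camera-phone photospace, a sensor that excels only in daylight scores poorly under this weighting regardless of its pixel count.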

  17. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image-formation algorithms. Most research in this area rests on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical-subsystem artifacts and sensor noise.

  18. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.

  19. Developments of a multi-wavelength spectro-polarimeter on the Domeless Solar Telescope at Hida Observatory

    NASA Astrophysics Data System (ADS)

    Anan, Tetsu; Huang, Yu-Wei; Nakatani, Yoshikazu; Ichimoto, Kiyoshi; UeNo, Satoru; Kimura, Goichi; Ninomiya, Shota; Okada, Sanetaka; Kaneda, Naoki

    2018-05-01

    To obtain full Stokes spectra in multi-wavelength windows simultaneously, we developed a new spectro-polarimeter on the Domeless Solar Telescope at Hida Observatory. The new polarimeter consists of a 60 cm aperture vacuum telescope on an altazimuth mounting, an image rotator, a high-dispersion spectrograph, and a polarization modulator and an analyzer composed of a continuously rotating waveplate with a retardation that is nearly constant at around 127° in 500-1100 nm. There are also a polarizing beam splitter located close behind the focus of the telescope, fast and large format CMOS cameras, and an infrared camera. A slit spectrograph allows us to obtain spectra in as many wavelength windows as the number of cameras. We characterized the instrumental polarization of the entire system and established a polarization calibration procedure. The cross-talks among the Stokes Q, U, and V have been evaluated to be about 0.06%-1.2%, depending on the degree of the intrinsic polarizations. In a typical observing setup, a sensitivity of 0.03% can be achieved in 20-60 seconds for 500-1100 nm. The new polarimeter is expected to provide a powerful tool for diagnosing the 3D magnetic field and other vector physical quantities in the solar atmosphere.

  20. Use of Vertical Aerial Images for Semi-Oblique Mapping

    NASA Astrophysics Data System (ADS)

    Poli, D.; Moe, K.; Legat, K.; Toschi, I.; Lago, F.; Remondino, F.

    2017-05-01

    The paper proposes a methodology for the use of the oblique sections of images from large-format photogrammetric cameras, exploiting the central perspective geometry in the lateral parts of nadir images ("semi-oblique" images). The origin of the investigation was a photogrammetric flight over Norcia (Italy), which was seriously damaged by the earthquake of 30/10/2016. Contrary to the original plan of oblique acquisitions, the flight was executed on 15/11/2017 using an UltraCam Eagle camera with an 80 mm focal length, combining two flight plans rotated by 90° ("crisscross" flight). The images (GSD 5 cm) were used to extract a 2.5D DSM sampled to an XY-grid size of 2 GSD, a 3D point cloud with a mean spatial resolution of 1 GSD, and a 3D mesh model at a resolution of 10 cm of the historic centre of Norcia for a quantitative assessment of the damage. From the acquired nadir images, the "semi-oblique" images (forward, backward, left and right views) could be extracted and processed in a modified version of the GEOBLY software for measurement and restitution purposes. The potential of such semi-oblique image acquisitions from nadir-view cameras is shown and discussed.

  1. Magnetic Lateral Flow Strip for the Detection of Cocaine in Urine by Naked Eyes and Smart Phone Camera.

    PubMed

    Wu, Jing; Dong, Mingling; Zhang, Cheng; Wang, Yu; Xie, Mengxia; Chen, Yiping

    2017-06-05

    Magnetic lateral flow strip (MLFS) based on magnetic bead (MB) and smart phone camera has been developed for quantitative detection of cocaine (CC) in urine samples. CC and CC-bovine serum albumin (CC-BSA) could competitively react with MB-antibody (MB-Ab) of CC on the surface of the test line of the MLFS. The color of the MB-Ab conjugate on the test line relates to the concentration of the target in the competition immunoassay format, which can be used as a visual signal. Furthermore, the color density of the MB-Ab conjugate can be converted into a digital signal (gray value) by a smart phone, which can be used as a quantitative signal. The linear detection range for CC is 5-500 ng/mL and the relative standard deviations are under 10%. The visual limit of detection was 5 ng/mL and the whole analysis time was within 10 min. The MLFS has been successfully employed for the detection of CC in urine samples without sample pre-treatment, and the results also agree with those of enzyme-linked immunosorbent assay (ELISA). With the popularization of smart phone cameras, the MLFS has large potential in the detection of drug residues by virtue of its stability, speed, and low cost.

  2. Are camera surveys useful for assessing recruitment in white-tailed deer?

    Treesearch

    M. Colter Chitwood; Marcus A. Lashley; John C. Kilgo; Michael J. Cherry; L. Mike Conner; Mark Vukovich; H. Scott Ray; Charles Ruth; Robert J. Warren; Christopher S. DePerno; Christopher E. Moorman

    2017-01-01

    Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter...

  3. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully; this compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  4. Investigating Image Formation with a Camera Obscura: a Study in Initial Primary Science Teacher Education

    NASA Astrophysics Data System (ADS)

    Muñoz-Franco, Granada; Criado, Ana María; García-Carmona, Antonio

    2018-04-01

    This article presents the results of a qualitative study aimed at determining the effectiveness of the camera obscura as a didactic tool to understand image formation (i.e., how it is possible to see objects and how their image is formed on the retina, and what the image formed on the retina is like compared to the object observed) in a context of scientific inquiry. The study involved 104 prospective primary teachers (PPTs) who were being trained in science teaching. To assess the effectiveness of this tool, an open questionnaire was applied before (pre-test) and after (post-test) the educational intervention. The data were analyzed by combining methods of inter- and intra-rater analysis. The results showed that more than half of the PPTs advanced in their ideas towards the desirable level of knowledge in relation to the phenomena studied. The conclusion reached is that the camera obscura, used in a context of scientific inquiry, is a useful tool for PPTs to improve their knowledge about image formation and experience in the first person an authentic scientific inquiry during their teacher training.

  5. Bringing the Digital Camera to the Physics Lab

    ERIC Educational Resources Information Center

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-01-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…

  6. Multiple views of the October 2003 Cedar Fires captured by the High Performance Wireless Research and Education Network

    NASA Astrophysics Data System (ADS)

    Morikawa, E.; Nayak, A.; Vernon, F.; Braun, H.; Matthews, J.

    2004-12-01

    Late October 2003 brought devastating fires to the entire Southern California region. The NSF-funded High Performance Wireless Research and Education Network (HPWREN - http://hpwren.ucsd.edu/) cameras captured the development and progress of the Cedar fire in San Diego County. Cameras on Mt. Laguna, Mt. Woodson, Ramona Airport, and North Peak, recording one frame every 12 seconds, allowed for a time-lapse composite showing the fire's formation and progress from its beginnings on October 26th to October 30th. The time-lapse camera footage depicts gushing smoke formations during the day and bright orange walls of fire at night. The final video includes time-synchronized views from multiple cameras and an animated map highlighting the progress of the fire over time, with a directional indicator for each of the displayed cameras. The video is narrated by California Department of Forestry and Fire Protection Fire Captain Ron Serabia (retd.), who was working as an Air Tactical Group Supervisor with the aerial assault on the Cedar Fire on Sunday, October 26, 2003. The movie will be made available for download from the Scripps Institution of Oceanography Visualization Center Visual Objects library (supported by the OptIPuter project) at http://www.siovizcenter.ucsd.edu.

  7. Dense grid of narrow bandpass filters for the JST/T250 telescope: summary of results

    NASA Astrophysics Data System (ADS)

    Brauneck, Ulf; Sprengard, Ruediger; Bourquin, Sebastien; Marín-Franch, Antonio

    2018-01-01

    On the Javalambre mountain in Spain, the Centro de Estudios de Fisica del Cosmos de Aragon has set up two telescopes, the JST/T250 and the JAST/T80. The JAST/T80 telescope integrates T80Cam, a large format, single CCD camera, while the JST/T250 will mount the JPCam instrument, a 1.2 Gpix camera equipped with a 14-CCD mosaic using the new large format e2v 9.2k×9.2k 10-μm pixel detectors. Both T80Cam and JPCam integrate a large number of filters with dimensions of 106.8×106.8 mm2 and 101.7×95.5 mm2, respectively. For this instrument, SCHOTT manufactured 56 specially designed steep-edged bandpass interference filters, which were recently completed. The filter set consists of bandpass filters in the range between 348.5 and 910 nm and a longpass filter at 915 nm. Most of the filters have a full-width at half-maximum (FWHM) of 14.5 nm and blocking between 250 and 1050 nm with an optical density of OD5. Absorptive color glass substrates in combination with interference filters were used to minimize residual reflection in order to avoid ghost images. In spite of containing absorptive elements, the filters show the maximum possible transmission. This was achieved by using magnetron sputtering for the filter coating process. The most important requirement for the continuous photometric survey is the tight tolerancing of the central wavelengths and FWHM of the filters. This ensures each bandpass has a defined overlap with its neighbors. High image quality required a low transmitted wavefront error (<λ/4 locally and <λ/2 over the whole aperture), which was achieved even when combining two or three substrates. We report on the spectral and interferometric results measured on the whole set of filters.

  8. Star formation in the outskirts of DDO 154: a top-light IMF in a nearly dormant disc

    NASA Astrophysics Data System (ADS)

    Watts, Adam B.; Meurer, Gerhardt R.; Lagos, Claudia D. P.; Bruzzese, Sarah M.; Kroupa, Pavel; Jerabkova, Tereza

    2018-07-01

    We present optical photometry of Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS)/Wide Field Camera (WFC) data of the resolved stellar populations in the outer disc of the dwarf irregular galaxy DDO 154. The photometry reveals that young main sequence (MS) stars are almost absent from the outermost H I disc. Instead, most are clustered near the main stellar component of the galaxy. We constrain the stellar initial mass function (IMF) by comparing the luminosity function of the MS stars to simulated stellar populations, assuming a constant star formation rate over the dynamical time-scale. The best-fitting IMF is deficient in high-mass stars compared to a canonical Kroupa IMF, with a best-fitting slope α = -2.45 and upper mass limit MU = 16 M⊙. This top-light IMF is consistent with predictions of the integrated galactic IMF theory. Combining the HST images with H I data from The H I Nearby Galaxy Survey (THINGS), we determine the star formation law (SFL) in the outer disc. The fit has a power-law exponent N = 2.92 ± 0.22 and zero-point A = 4.47 ± 0.65 × 10-7 M⊙ yr-1 kpc-2. This is depressed compared to the Kennicutt-Schmidt SFL, but consistent with weak star formation observed in diffuse H I environments. Extrapolating the SFL over the outer disc implies that there could be significant star formation occurring that is not detectable in H α. Last, we determine the Toomre stability parameter Q of the outer disc of DDO 154 using the THINGS H I rotation curve and velocity dispersion map. 72 per cent of the H I in our field has Q ≤ 4 and this incorporates 96 per cent of the observed MS stars. Hence, 28 per cent of the H I in the field is largely dormant.
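
    The best-fitting star formation law quoted above is a simple power law, Σ_SFR = A Σ^N, and can be evaluated directly. The unit convention for the gas surface density in this sketch is an assumption (taken as M⊙ pc⁻², as is common for Kennicutt-Schmidt-type fits):

```python
# Hedged sketch evaluating the abstract's best-fitting star formation law:
# Sigma_SFR = A * Sigma**N with N = 2.92 and A = 4.47e-7 Msun/yr/kpc^2.

def sfr_surface_density(sigma_gas, A=4.47e-7, N=2.92):
    """Star formation rate surface density (Msun/yr/kpc^2) for a given
    gas surface density (assumed here to be in Msun/pc^2)."""
    return A * sigma_gas ** N

# With N = 2.92, doubling the gas surface density raises the predicted
# SFR surface density by a factor of 2**2.92, about 7.6x.
ratio = sfr_surface_density(2.0) / sfr_surface_density(1.0)
print(round(ratio, 2))  # 7.57
```

    The steep exponent (compared with the canonical Kennicutt-Schmidt N ≈ 1.4) is what makes star formation in the diffuse outer disc so strongly suppressed at low gas surface densities.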

  9. Tethys Eyes Saturn

    NASA Image and Video Library

    2015-06-15

    The two large craters on Tethys, near the line where day fades to night, almost resemble two giant eyes observing Saturn. The location of these craters on Tethys' terminator throws their topography into sharp relief. Both are large craters, but the larger and southernmost of the two shows a more complex structure. The angle of the lighting highlights a central peak in this crater. Central peaks are the result of the surface reacting to the violent post-impact excavation of the crater. The northern crater does not show a similar feature. Possibly the impact was too small to form a central peak, or the composition of the material in the immediate vicinity couldn't support the formation of a central peak. In this image Tethys is significantly closer to the camera, while the planet is in the background. Yet the moon is still utterly dwarfed by the giant Saturn. This view looks toward the anti-Saturn side of Tethys. North on Tethys is up and rotated 42 degrees to the right. The image was taken in visible light with the Cassini spacecraft wide-angle camera on April 11, 2015. The view was obtained at a distance of approximately 75,000 miles (120,000 kilometers) from Tethys. Image scale at Tethys is 4 miles (7 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/pia18318

  10. Minimum Requirements for Taxicab Security Cameras.

    PubMed

    Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene

    2014-07-01

    The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various lighting and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability.

  11. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state of the art. The reasons why the output technique in the thermal world stayed unchanged for so long are the very conservative view of the military community, the long planning and turnaround times of programs, and slower growth in the pixel count of TIs compared to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.
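The bandwidth mismatch described above can be made concrete with rough arithmetic; the frame sizes and bit depths below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope raw bit rates: a CCIR-class digitized video link
# versus a megapixel thermal imager. All figures are illustrative.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e6

ccir = raw_bitrate_mbps(720, 576, 25, 8)             # ~83 Mbit/s
megapixel_ti = raw_bitrate_mbps(1280, 1024, 60, 14)  # ~1100 Mbit/s

# More than an order of magnitude apart, which is why compressed digital
# streaming replaced the fixed analog output format.
assert megapixel_ti / ccir > 10
```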

  12. Formations in Context (or, what is it?)

    NASA Image and Video Library

    2018-04-02

    This image from NASA's Mars Reconnaissance Orbiter is a close-up of a trough, along with channels draining into the depression. Some HiRISE images show strange-looking formations. Sometimes it helps to look at Context Camera images to understand the circumstances of a scene -- like this cutout from CTX 033783_1509 -- which here shows an impact crater with a central peak, and a collapse depression with concentric troughs just north of that peak. On the floor of the trough is some grooved material that we typically see in middle latitude regions where there has been glacial flow. These depressions with concentric troughs exist elsewhere on Mars, and their origins remain a matter of debate. NB: The Context Camera is another instrument onboard MRO, and it has a larger viewing angle than HiRISE, but less resolution capability than our camera. https://photojournal.jpl.nasa.gov/catalog/PIA22348

  13. Lunar Reconnaissance Orbiter Data Enable Science and Terrain Analysis of Potential Landing Sites in South Pole-Aitken Basin

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.

    2017-12-01

    Exploring the South Pole-Aitken basin (SPA), one of the key unsampled geologic terranes on the Moon, is a high priority for Solar System science. As the largest and oldest recognizable impact basin on the Moon, it anchors the heavy bombardment chronology. It is thus a key target for sample return to better understand the impact flux in the Solar System between formation of the Moon and 3.9 Ga when Imbrium, one of the last of the great lunar impact basins, formed. Exploration of SPA has implications for understanding early habitable environments on the terrestrial planets. Global mineralogical and compositional data exist from the Clementine UV-VIS camera, the Lunar Prospector Gamma Ray Spectrometer, the Moon Mineralogy Mapper (M3) on Chandrayaan-1, the Chang'E-1 Imaging Interferometer, the spectral suite on SELENE, and the Lunar Reconnaissance Orbiter Cameras (LROC) Wide Angle Camera (WAC) and Diviner thermal radiometer. Integration of data sets enables synergistic assessment of geology and distribution of units across multiple spatial scales. Mineralogical assessment using hyperspectral data indicates spatial relationships with mineralogical signatures, e.g., central peaks of complex craters, consistent with inferred SPA basin structure and melt differentiation (Moriarty & Pieters, 2015, JGR-P 118). Delineation of mare, cryptomare, and nonmare surfaces is key to interpreting compositional mixing in the formation of SPA regolith to interpret remotely sensed data, and for scientific assessment of landing sites. LROC Narrow Angle Camera (NAC) images show the location and distribution of >0.5 m boulders and fresh craters that constitute the main threats to automated landers and thus provide critical information for landing site assessment and planning. 
NAC images suitable for geometric stereo derivation, the digital terrain models derived from them (controlled with Lunar Orbiter Laser Altimeter (LOLA) data), and oblique NAC images made with large slews of the spacecraft are crucial to both scientific and landing-site assessments. These images, however, require favorable illumination and significant spacecraft resources, and thus make up only a small percentage of all images taken. Supporting LRO's continued operation to acquire these critical datasets is essential for future exploration.

  14. Large-Scale High-Resolution Cylinder Wake Measurements in a Wind Tunnel using Tomographic PIV with sCMOS Cameras

    NASA Astrophysics Data System (ADS)

    Michaelis, Dirk; Schroeder, Andreas

    2012-11-01

Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid-dynamic experiments. The maturing of tomographic PIV allows its application in medium- to large-scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm³. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity, so a high-power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras, mounted in the forward-scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back so that all cameras are in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm³ = 2392 cm³, more than 60 times larger than previously achieved. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomographic PIV in wind tunnels. Supported by EU project no. 265695.
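The volume figures quoted in the abstract are internally consistent, as a quick check shows:

```python
# Checking the abstract's measurement-volume figures.
prev_mm3 = 50 * 50 * 15    # typical earlier tomo-PIV volume
new_mm3 = 230 * 200 * 52   # optimized volume

assert new_mm3 == 2_392_000   # i.e. 2392 cm³, as stated
ratio = new_mm3 / prev_mm3
assert ratio > 60             # "more than 60 times larger" (~64x)
```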

  15. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. 15. SUBJECT TERMS Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera 16. SECURITY

  16. Radiometric calibration of an ultra-compact microbolometer thermal imaging module

    NASA Astrophysics Data System (ADS)

    Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.

    2017-05-01

As microbolometer focal plane array formats steadily decrease in size, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the camera decreases, the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.
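One common form of temperature compensation (a sketch under assumptions, not necessarily the paper's exact algorithm) makes the radiometric gain and offset functions of the measured FPA temperature, with coefficients fitted from blackbody views at several ambient temperatures:

```python
import numpy as np

# Sketch: raw counts -> radiance with FPA-temperature-dependent gain and
# offset. The polynomial coefficients below are hypothetical, for
# illustration only; real values come from a blackbody calibration.

def counts_to_radiance(counts, t_fpa_k, gain_coeffs, offset_coeffs):
    """Apply gain(T_fpa) * counts + offset(T_fpa)."""
    gain = np.polyval(gain_coeffs, t_fpa_k)
    offset = np.polyval(offset_coeffs, t_fpa_k)
    return gain * counts + offset

gain_c = [1.0e-6, 2.0e-3]   # gain(T) = 1e-6*T + 2e-3   (illustrative)
offset_c = [-1.0e-2, 4.0]   # offset(T) = -0.01*T + 4   (illustrative)

frame = np.full((60, 80), 8000.0)   # Lepton-sized raw frame (60x80 pixels)
radiance = counts_to_radiance(frame, t_fpa_k=300.0,
                              gain_coeffs=gain_c, offset_coeffs=offset_c)
```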

  17. Bringing the Digital Camera to the Physics Lab

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-03-01

We discuss how compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. The difficulties result from the nonlinear treatment of light-intensity values stored in compressed files. To overcome these problems, one has to adopt uncompressed, native formats, as we examine in this work.
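To illustrate the nonlinearity: JPEG-style files store gamma-encoded values, so encoded pixel values are not proportional to light intensity. The inverse sRGB transfer function below is a common assumption for a first-order correction; real cameras apply their own tone curves, which is why native raw formats are preferable for quantitative work.

```python
def srgb_to_linear(v):
    """Invert the standard sRGB transfer function; v in [0, 1]."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# A pixel whose encoded value is twice another's is NOT twice as bright:
ratio = srgb_to_linear(1.0) / srgb_to_linear(0.5)
assert 4.5 < ratio < 4.9   # ~4.7x in linear light, not 2x
```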

  18. AMICA (Antarctic Multiband Infrared CAmera) project

    NASA Astrophysics Data System (ADS)

    Dolci, Mauro; Straniero, Oscar; Valentini, Gaetano; Di Rico, Gianluca; Ragni, Maurizio; Pelusi, Danilo; Di Varano, Igor; Giuliani, Croce; Di Cianno, Amico; Valentini, Angelo; Corcione, Leonardo; Bortoletto, Favio; D'Alessandro, Maurizio; Bonoli, Carlotta; Giro, Enrico; Fantinel, Daniela; Magrin, Demetrio; Zerbi, Filippo M.; Riva, Alberto; Molinari, Emilio; Conconi, Paolo; De Caprio, Vincenzo; Busso, Maurizio; Tosti, Gino; Nucciarelli, Giuliano; Roncella, Fabio; Abia, Carlos

    2006-06-01

The Antarctic Plateau offers unique opportunities for ground-based infrared astronomy. AMICA (Antarctic Multiband Infrared CAmera) is an instrument designed to perform astronomical imaging from Dome C in the near- (1 - 5 μm) and mid- (5 - 27 μm) infrared wavelength regions. The camera consists of two channels, equipped with a Raytheon InSb 256 array detector and a DRS MF-128 Si:As IBC array detector, cryocooled at 35 and 7 K respectively. Cryogenic devices will move a filter wheel and a sliding mirror, used to feed the two detectors alternately. Fast control and readout, synchronized with the chopping secondary mirror of the telescope, will be required because of the large background expected at these wavelengths, especially beyond 10 μm. An environmental control system is needed to ensure the correct start-up, shut-down and housekeeping of the camera. The main technical challenge is represented by the extreme environmental conditions of Dome C (T about -90 °C, p around 640 mbar) and the need for complete automation of the overall system. AMICA will be mounted at the Nasmyth focus of the 80 cm IRAIT telescope and will perform survey-mode automatic observations of selected regions of the Southern sky. The first goal will be a direct estimate of the observational quality of this new, highly promising site for infrared astronomy. In addition, IRAIT, equipped with AMICA, is expected to provide a significant improvement in the knowledge of fundamental astrophysical processes, such as the late stages of stellar evolution (especially AGB and post-AGB stars) and star formation.

  19. Geometrical calibration television measuring systems with solid state photodetectors

    NASA Astrophysics Data System (ADS)

    Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.

    2000-11-01

Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, and criminalistics. Measurement by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. Many television measuring systems use expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason, the inexpensive mountingless camera SK1004CP (1/3" format, costing up to $40) and an Aver2000 frame grabber were used in the experiments.

  20. Cloud formation over South America - fifth orbit pass

    NASA Image and Video Library

    1962-10-03

    S62-06612 (3 Oct. 1962) --- Cloud formation over South America taken during the fifth orbit pass of the Mercury-Atlas 8 (MA-8) mission by astronaut Walter M. Schirra Jr. with a hand-held camera. Photo credit: NASA

  1. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

The number of network cameras has grown rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
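The per-frame programming model can be sketched as follows; the camera IDs and the dispatch loop are simulated stand-ins, since the real system's website and API (not reproduced here) handle camera selection and deployment.

```python
# Sketch of the usage model: the user writes a single-frame analysis
# function, and the system maps it over frames from many cameras.
# Everything here is an in-memory simulation of that division of labor.

def analyze_frame(frame):
    """User-supplied single-frame analysis: mean brightness of a grayscale frame."""
    flat = [px for row in frame for px in row]
    return sum(flat) / len(flat)

def run_on_cameras(frames_by_camera, analyze):
    """The system's role, simulated: apply the user's analysis camera by camera."""
    return {cam: analyze(frame) for cam, frame in frames_by_camera.items()}

frames = {
    "cam-001": [[10, 20], [30, 40]],
    "cam-002": [[200, 200], [200, 200]],
}
results = run_on_cameras(frames, analyze_frame)
assert results == {"cam-001": 25.0, "cam-002": 200.0}
```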

  2. New technologies for HWIL testing of WFOV, large-format FPA sensor systems

    NASA Astrophysics Data System (ADS)

    Fink, Christopher

    2016-05-01

Advancements in FPA density and the associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced current-art HWIL technology. Whether testing in optical-projection or digital-signal-injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high-resolution MWIR camera deployed in some UAV reconnaissance systems features 16 MP resolution at 60 Hz, while the current upper limit of IR emitter arrays is ~1 MP, and the single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60 Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large-format FPAs, including the size and spatial detail required for very large terrain areas, and the multi-channel low-latency synchronization needed to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, in which digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity, physics-based IR scene simulator in conjunction with a novel image-composition hardware unit to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
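The resolution shortfall quoted above translates directly into channel counts; a quick calculation (frame geometry assumed square at 4096x4096):

```python
# The bandwidth gap in numbers: a ~16 MP large-format FPA at 60 Hz versus
# one dual-link DVI channel (2560x1600 at 60 Hz, same frame rate).
sensor_px = 4096 * 4096
dvi_px = 2560 * 1600

channels_needed = sensor_px / dvi_px
assert 4.0 < channels_needed < 4.2   # ~4.1 DVI-class channels per sensor
```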

  3. Southeastern Mediterranean Panorama

    NASA Image and Video Library

    1991-06-14

STS040-152-180 (5-14 June 1991) --- The Sinai Peninsula dominates this north-looking, oblique view. According to NASA photo experts studying the STS-40 imagery, the Red Sea in the foreground is clear of river sediment because of the prevailing dry climate of the Middle East. The great rift of the Gulf of Aqaba extends northward to Turkey (top right) through the Dead Sea. The international boundary between Israel and Egypt, reflecting different rural landscapes, stands out clearly. The Nile River runs through the frame. NASA photo experts believe the haze over the Mediterranean to be wind-borne dust. The photo was taken with an Aero-Linhof large format camera.

  4. Upper Texas Gulf Coast, USA

    NASA Image and Video Library

    1989-05-08

    STS030-152-066 (4-8 May 1989) --- The upper Texas and Louisiana Gulf Coast area was clearly represented in this large format frame photographed by the astronaut crew of the Earth-orbiting Space Shuttle Atlantis. The area covered stretches almost 300 miles from Aransas Pass, Texas to Cameron, Louisiana. The sharp detail of both the natural and cultural features noted throughout the scene is especially evident in the Houston area, where highways, major streets, airport runways and even some neighborhood lanes are easily seen. Other major areas seen are Austin, San Antonio and the Golden Triangle. An Aero Linhof camera was used to expose the frame.

  5. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  6. Students' Framing of Laboratory Exercises Using Infrared Cameras

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  7. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  8. VizieR Online Data Catalog: PHAT. XIX. Formation history of M31 disk (Williams+, 2017)

    NASA Astrophysics Data System (ADS)

    Williams, B. F.; Dolphin, A. E.; Dalcanton, J. J.; Weisz, D. R.; Bell, E. F.; Lewis, A. R.; Rosenfield, P.; Choi, Y.; Skillman, E.; Monachesi, A.

    2018-05-01

    The data for this study come from the Panchromatic Hubble Andromeda Treasury (PHAT) survey (Dalcanton+ 2012ApJS..200...18D ; Williams+ 2014, J/ApJS/215/9). Briefly, PHAT is a multiwavelength HST survey mapping 414 contiguous HST fields of the northern M31 disk and bulge in six broad wavelength bands from the near-ultraviolet to the near-infrared. The survey obtained data in the F275W and F336W bands with the UVIS detectors of the Wide-Field Camera 3 (WFC3) camera, the F475W and F814W bands in the WFC detectors of the Advanced Camera for Surveys (ACS) camera, and the F110W and F160W bands in the IR detectors of the WFC3 camera. (4 data files).

  9. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
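The inter-camera quaternion at the center of this analysis is the relative rotation between two star camera heads, q_AB = q_A⁻¹ ⊗ q_B, which is constant for a rigid spacecraft; its fluctuations are the noise being characterized. A minimal sketch (Hamilton convention, (w, x, y, z) order; illustrative only, not GRACE Level-1B processing code):

```python
import math

# Hand-rolled quaternion helpers, Hamilton convention, (w, x, y, z) order.
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

# Two attitudes differing by a small rotation about z (a few arcsec):
theta = math.radians(0.001)
q_a = (1.0, 0.0, 0.0, 0.0)
q_b = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

# The inter-camera quaternion; for a rigid spacecraft it should be constant,
# so its time variation is attitude measurement noise.
q_ab = qmul(qconj(q_a), q_b)
recovered = 2 * math.asin(q_ab[3])
assert abs(recovered - theta) < 1e-12
```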

  10. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
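The image-quality metrics used in the comparison can be sketched as follows: given an ensemble of reconstructions of a known phantom, per-pixel bias, variance, and MSE satisfy MSE = bias² + variance. This is a sketch of the metrics only, not of the OE algorithm itself; the phantom and noise model are illustrative.

```python
import numpy as np

def image_metrics(recons, truth):
    """recons: (n, H, W) ensemble; truth: (H, W). Returns mean bias², variance, MSE."""
    mean_img = recons.mean(axis=0)
    bias2 = ((mean_img - truth) ** 2).mean()
    var = recons.var(axis=0).mean()             # ddof=0, population variance
    mse = ((recons - truth[None]) ** 2).mean()
    return bias2, var, mse

rng = np.random.default_rng(0)
truth = np.ones((8, 8))
recons = truth[None] + 0.1 + rng.normal(0.0, 0.05, (100, 8, 8))  # biased + noisy
b2, v, mse = image_metrics(recons, truth)
assert abs(mse - (b2 + v)) < 1e-9   # the decomposition MSE = bias² + variance
```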

  11. Are camera surveys useful for assessing recruitment in white-tailed deer?

    DOE PAGES

    Chitwood, M. Colter; Lashley, Marcus A.; Kilgo, John C.; ...

    2016-12-27

Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter in ungulate population dynamics, there is a growing need to test the effectiveness of camera surveys for assessing fawn recruitment. At Savannah River Site, South Carolina, we used six years of camera-based recruitment estimates (i.e. fawn:doe ratio) to predict concurrently collected annual radiotag-based survival estimates. The coefficient of determination (R²) was 0.445, indicating some support for the viability of cameras to reflect recruitment. Here, we added two years of data from Fort Bragg Military Installation, North Carolina, which improved R² to 0.621 without accounting for site-specific variability. Also, we evaluated the correlation between year-to-year changes in recruitment and survival using the Savannah River Site data; R² was 0.758, suggesting that camera-based recruitment could be useful as an indicator of the trend in survival. Because so few researchers concurrently estimate survival and camera-based recruitment, examining this relationship at larger spatial scales while controlling for numerous confounding variables remains difficult. We believe that future research should test the validity of our results from other areas with varying deer and camera densities, as site (e.g. presence of feral pigs Sus scrofa) and demographic (e.g. fawn age at time of camera survey) parameters may have a large influence on detectability. Until such biases are fully quantified, we urge researchers and managers to use caution when advocating the use of camera-based recruitment estimates.

  12. Genetic mechanisms involved in the evolution of the cephalopod camera eye revealed by transcriptomic and developmental studies

    PubMed Central

    2011-01-01

    Background Coleoid cephalopods (squids and octopuses) have evolved a camera eye, the structure of which is very similar to that found in vertebrates and which is considered a classic example of convergent evolution. Other molluscs, however, possess mirror, pin-hole, or compound eyes, all of which differ from the camera eye in the degree of complexity of the eye structures and neurons participating in the visual circuit. Therefore, genes expressed in the cephalopod eye after divergence from the common molluscan ancestor could be involved in eye evolution through association with the acquisition of new structural components. To clarify the genetic mechanisms that contributed to the evolution of the cephalopod camera eye, we applied comprehensive transcriptomic analysis and conducted developmental validation of candidate genes involved in coleoid cephalopod eye evolution. Results We compared gene expression in the eyes of 6 molluscan (3 cephalopod and 3 non-cephalopod) species and selected 5,707 genes as cephalopod camera eye-specific candidate genes on the basis of homology searches against 3 molluscan species without camera eyes. First, we confirmed the expression of these 5,707 genes in the cephalopod camera eye formation processes by developmental array analysis. Second, using molecular evolutionary (dN/dS) analysis to detect positive selection in the cephalopod lineage, we identified 156 of these genes in which functions appeared to have changed after the divergence of cephalopods from the molluscan ancestor and which contributed to structural and functional diversification. Third, we selected 1,571 genes, expressed in the camera eyes of both cephalopods and vertebrates, which could have independently acquired a function related to eye development at the expression level. 
Finally, as experimental validation, we identified three functionally novel cephalopod camera eye genes related to optic lobe formation in cephalopods by in situ hybridization analysis of embryonic pygmy squid. Conclusion We identified 156 genes positively selected in the cephalopod lineage and 1,571 genes commonly found in the cephalopod and vertebrate camera eyes from the analysis of cephalopod camera eye specificity at the expression level. Experimental validation showed that the cephalopod camera eye-specific candidate genes include those expressed in the outer part of the optic lobes, which is unique to coleoid cephalopods. The results of this study suggest that changes in gene expression and in the primary structure of proteins (through positive selection) from those in the common molluscan ancestor could have contributed, at least in part, to cephalopod camera eye acquisition. PMID:21702923

  13. Visibility of children behind 2010-2013 model year passenger vehicles using glances, mirrors, and backup cameras and parking sensors.

    PubMed

    Kidd, David G; Brethwaite, Andrew

    2014-05-01

    This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

Introduction The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or 2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km² in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher-resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where the availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. 
In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as a series of ArcReader projects which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in ArcMap projects. The ArcMap projects allow fuller use of the data, but require ESRI ArcMap software. Multiple projects were required to accommodate the large number of images needed. A fuller description of the projects can be found in the Dunes_ReadMe file and the ReadMe_GIS file in the Documentation folder. For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. ReadMe files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel (.xls) and ASCII formats.

  15. Can camera traps monitor Komodo dragons a large ectothermic predator?

    PubMed

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.

  16. Can Camera Traps Monitor Komodo Dragons a Large Ectothermic Predator?

    PubMed Central

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S.

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species. PMID:23527027
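
    The single-season site occupancy model mentioned above combines an occupancy probability ψ with per-survey detection probabilities p: a site's likelihood sums the "occupied but possibly missed" branch and the "unoccupied" branch. A minimal sketch with a constant p and invented numbers (the paper's models let p vary by site, survey and trapping method):

```python
def site_likelihood(history, psi, p):
    """Single-season occupancy likelihood for one site's detection history.

    history : sequence of 0/1 detections over repeat surveys
    psi     : probability the site is occupied
    p       : per-survey detection probability (held constant here for
              simplicity; the study lets p vary by site and survey)
    """
    # Probability of this exact history given the site IS occupied.
    occupied_term = 1.0
    for h in history:
        occupied_term *= p if h else (1.0 - p)
    # An unoccupied site can only produce an all-zero history.
    never_detected = 1.0 if not any(history) else 0.0
    return psi * occupied_term + (1.0 - psi) * never_detected

# A site detected on 1 of 3 surveys: only the "occupied" branch contributes.
print(site_likelihood([0, 1, 0], psi=0.8, p=0.4))  # → 0.8 * 0.6*0.4*0.6
```

    Fitting ψ and p by maximizing the product of such likelihoods over all sites is what packages like PRESENCE or unmarked automate.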

  17. Minimum Requirements for Taxicab Security Cameras*

    PubMed Central

    Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene

    2015-01-01

    Problem The homicide rate of the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Methods Minimum technical requirements and a standard test protocol for taxicab security cameras for effective taxicab facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs in various light and cab-seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted on the minimum technical requirements for taxicab security cameras. Results Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. Practical Applications These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability. PMID:26823992
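
    The five thresholds above lend themselves to a simple pass/fail screening of a candidate camera's spec sheet. The sketch below encodes them; the spec-dictionary field names are invented for illustration, and real procurement would follow the study's full test protocol rather than paper specs.

```python
# Worst-case thresholds suggested by the study (field names are illustrative).
THRESHOLDS = {
    "resolution_px": (1024, 768),        # XGA format
    "highlight_dynamic_range_ev": 1.0,
    "twilight_dynamic_range_ev": 3.3,
    "max_lens_distortion_pct": 30.0,
    "max_shutter_time_s": 1.0 / 30.0,    # shutter at least as fast as 1/30 s
}

def meets_minimums(cam):
    """Return a list of failed criteria (an empty list means the camera passes)."""
    failures = []
    min_w, min_h = THRESHOLDS["resolution_px"]
    if cam["resolution_px"][0] < min_w or cam["resolution_px"][1] < min_h:
        failures.append("resolution below XGA")
    if cam["highlight_dynamic_range_ev"] < THRESHOLDS["highlight_dynamic_range_ev"]:
        failures.append("highlight dynamic range")
    if cam["twilight_dynamic_range_ev"] < THRESHOLDS["twilight_dynamic_range_ev"]:
        failures.append("twilight dynamic range")
    if cam["lens_distortion_pct"] > THRESHOLDS["max_lens_distortion_pct"]:
        failures.append("lens distortion")
    if cam["shutter_time_s"] > THRESHOLDS["max_shutter_time_s"]:
        failures.append("shutter too slow for motion blur")
    return failures

candidate = {"resolution_px": (1280, 960), "highlight_dynamic_range_ev": 1.2,
             "twilight_dynamic_range_ev": 3.5, "lens_distortion_pct": 25.0,
             "shutter_time_s": 1.0 / 60.0}
print(meets_minimums(candidate))  # → []
```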

  18. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
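
    The homography step described above maps pixels of one camera into the reference frame of another; the overlap region then follows from where the warped image corners land. A small numpy sketch, with an invented, purely translational homography standing in for one estimated from SURF matches (a real pipeline would fit H with a robust estimator such as RANSAC):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of (x, y) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# Illustrative homography: pure translation of 400 px in x, as if the second
# camera's view sits to the right of the reference camera.
H = np.array([[1.0, 0.0, 400.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

w, h = 640, 480
corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
warped = warp_points(H, corners)

# Overlap of the warped frame with the reference frame [0, w] x [0, h]:
x0 = max(0.0, warped[:, 0].min())
x1 = min(float(w), warped[:, 0].max())
print("overlap width in reference frame:", x1 - x0)  # → 240.0
```

    The blending stage then resamples pixels in this overlap band so the seam between the two views is not visible.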

  19. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  20. Instrument Performance of GISMO, a 2 Millimeter TES Bolometer Camera used at the IRAM 30 m Telescope

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes

    2008-01-01

    In November of 2007 we demonstrated a monolithic Backshort-Under-Grid (BUG) 8x16 array in the field using our 2 mm wavelength imager GISMO (Goddard IRAM Superconducting 2 Millimeter Observer) at the IRAM 30 m telescope in Spain for astronomical observations. The 2 mm spectral range provides a unique terrestrial window enabling ground-based observations of the earliest active dusty galaxies in the universe and thereby allowing a better constraint on the star formation rate in these objects. The optical design incorporates a 100 mm diameter silicon lens cooled to 4 K, which provides the required fast beam yielding 0.9 lambda/D pixels. With this spatial sampling, GISMO will be very efficient at detecting sources serendipitously in large sky surveys, while the capability for diffraction limited imaging is preserved. The camera provides significantly greater detection sensitivity and mapping speed at this wavelength than has previously been possible. The instrument will fill in the spectral energy distribution of high redshift galaxies at the Rayleigh-Jeans part of the dust emission spectrum, even at the highest redshifts. Here I will present early results from our observing run with the first fielded BUG bolometer array. We have developed key technologies to enable highly versatile, kilopixel, infrared through millimeter wavelength bolometer arrays. The Backshort-Under-Grid (BUG) array consists of three components: 1) a transition-edge-sensor (TES) based bolometer array with background-limited sensitivity and high filling factor, 2) a quarter-wave reflective backshort grid providing high optical efficiency, and 3) a superconducting bump-bonded large format Superconducting Quantum Interference Device (SQUID) multiplexer readout. The array is described in more detail elsewhere (Allen et al., this conference).

  1. Influence of camera parameters on the quality of mobile 3D capture

    NASA Astrophysics Data System (ADS)

    Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska

    2010-01-01

    We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in a real-world scenario the cameras might move (vibrations, thermal bending) from their designated positions. For our experiments, we created a test framework, described in the paper. We investigate how mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect estimation quality (and how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo-matching quality, which can be used as a "rectification quality" measure.
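
    Why de-calibration hurts depth estimation can be made concrete with the standard stereo relation Z = f·B/d: a small uncompensated camera rotation shifts every pixel by roughly f·δ, biasing the disparity d and hence the recovered depth. All numbers below are illustrative, not taken from the paper's mobile test rig.

```python
import math

f_px = 800.0        # focal length in pixels (illustrative)
baseline_m = 0.05   # stereo baseline (illustrative)
true_depth_m = 2.0  # depth of a test point

# Ideal disparity for the test point: d = f * B / Z.
d_true = f_px * baseline_m / true_depth_m

# An uncompensated vergence drift of 0.1 degrees shifts pixels by ~f*delta.
delta_rad = math.radians(0.1)
d_biased = d_true + f_px * delta_rad

# Depth recovered from the biased disparity.
depth_biased = f_px * baseline_m / d_biased
print(d_true, depth_biased)
```

    Even a tenth of a degree of drift produces a pixel-scale disparity bias, which is why the paper's geometric corrections (or cropping margin on a larger CCD) matter.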

  2. Development of a Compton camera for safeguards applications in a pyroprocessing facility

    NASA Astrophysics Data System (ADS)

    Park, Jin Hyung; Kim, Young Su; Kim, Chan Hyeong; Seo, Hee; Park, Se-Hwan; Kim, Ho-Dong

    2014-11-01

    The Compton camera has the potential to be used for localizing nuclear materials in a large pyroprocessing facility due to its unique Compton kinematics-based electronic collimation method. Our R&D group, KAERI, and Hanyang University have made an effort to develop a scintillation-detector-based large-area Compton camera for safeguards applications. In the present study, a series of Monte Carlo simulations was performed with Geant4 in order to examine the effect of the detector parameters and the feasibility of using a Compton camera to obtain an image of the nuclear material distribution. Based on the simulation study, experimental studies were performed to assess the possibility of Compton imaging according to crystal type. Two different types of Compton cameras were fabricated and tested: a pixelated type with LYSO(Ce) and a monolithic type with NaI(Tl). The conclusions of this study, as design rules for a large-area Compton camera, can be summarized as follows: 1) the energy resolution, rather than the position resolution, of the component detector was the limiting factor for the imaging resolution; 2) the Compton imaging system needs to be placed as close as possible to the source location; and 3) both pixelated and monolithic crystals can be utilized; however, the monolithic type requires a stochastic-method-based position-estimating algorithm to improve the position resolution.
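
    The electronic collimation mentioned above rests on Compton kinematics: the two energy deposits fix the opening angle of a cone on which the source must lie, via cos θ = 1 − mₑc²(1/E₂ − 1/(E₁+E₂)), assuming the scattered photon is fully absorbed in the second detector. A minimal sketch (this is the textbook formula, not the paper's reconstruction code, which also handles position estimation and image backprojection):

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy, keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the Compton cone from two energy deposits.

    e_scatter_kev : energy deposited in the scatter detector (E1)
    e_absorb_kev  : energy of the scattered photon, fully absorbed
                    in the second detector (E2)
    Assumes full absorption, so the incident energy is E1 + E2.
    """
    e_incident = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_absorb_kev - 1.0 / e_incident)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# A Cs-137 (662 keV) photon depositing 200 keV in the scatterer:
print(compton_cone_angle(200.0, 462.0))
```

    This is also why conclusion 1) holds: the cone angle depends only on measured energies, so energy resolution propagates directly into angular (imaging) resolution.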

  3. The Resolved Stellar Populations in the LEGUS Galaxies

    NASA Astrophysics Data System (ADS)

    Sabbi, E.; Calzetti, D.; Ubeda, L.; Adamo, A.; Cignoni, M.; Thilker, D.; Aloisi, A.; Elmegreen, B. G.; Elmegreen, D. M.; Gouliermis, D. A.; Grebel, E. K.; Messa, M.; Smith, L. J.; Tosi, M.; Dolphin, A.; Andrews, J. E.; Ashworth, G.; Bright, S. N.; Brown, T. M.; Chandar, R.; Christian, C.; Clayton, G. C.; Cook, D. O.; Dale, D. A.; de Mink, S. E.; Dobbs, C.; Evans, A. S.; Fumagalli, M.; Gallagher, J. S., III; Grasha, K.; Herrero, A.; Hunter, D. A.; Johnson, K. E.; Kahre, L.; Kennicutt, R. C.; Kim, H.; Krumholz, M. R.; Lee, J. C.; Lennon, D.; Martin, C.; Nair, P.; Nota, A.; Östlin, G.; Pellerin, A.; Prieto, J.; Regan, M. W.; Ryon, J. E.; Sacchi, E.; Schaerer, D.; Schiminovich, D.; Shabani, F.; Van Dyk, S. D.; Walterbos, R.; Whitmore, B. C.; Wofford, A.

    2018-03-01

    The Legacy ExtraGalactic UV Survey (LEGUS) is a multiwavelength Cycle 21 Treasury program on the Hubble Space Telescope. It studied 50 nearby star-forming galaxies in 5 bands from the near-UV to the I-band, combining new Wide Field Camera 3 observations with archival Advanced Camera for Surveys data. LEGUS was designed to investigate how star formation occurs and develops on both small and large scales, and how it relates to the galactic environments. In this paper we present the photometric catalogs for all the apparently single stars identified in the 50 LEGUS galaxies. Photometric catalogs and mosaicked images for all filters are available for download. We present optical and near-UV color–magnitude diagrams for all the galaxies. For each galaxy we derived the distance from the tip of the red giant branch. We then used the NUV color–magnitude diagrams to identify stars more massive than 14 M⊙, and compared their number with the number of massive stars expected from the GALEX FUV luminosity. Our analysis shows that the fraction of massive stars forming in star clusters and stellar associations is roughly constant with the star formation rate. This lack of a relation suggests that the timescale for evaporation of unbound structures is comparable to or longer than 10 Myr. At low star formation rates this translates to an excess of mass in clustered environments as compared to model predictions of cluster evolution, suggesting that a significant fraction of stars form in unbound systems. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA Inc., under NASA contract NAS 5-26555.

  4. HUBBLE TARANTULA TREASURY PROJECT. III. PHOTOMETRIC CATALOG AND RESULTING CONSTRAINTS ON THE PROGRESSION OF STAR FORMATION IN THE 30 DORADUS REGION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabbi, E.; Anderson, J.; Cignoni, M.

    2016-01-15

    We present and describe the astro-photometric catalog of more than 800,000 sources found in the Hubble Tarantula Treasury Project (HTTP). HTTP is a Hubble Space Telescope Treasury program designed to image the entire 30 Doradus region down to the sub-solar (∼0.5 M⊙) mass regime using the Wide Field Camera 3 and the Advanced Camera for Surveys. We observed 30 Doradus in the near-ultraviolet (F275W, F336W), optical (F555W, F658N, F775W), and near-infrared (F110W, F160W) wavelengths. The stellar photometry was measured using point-spread function fitting across all bands simultaneously. The relative astrometric accuracy of the catalog is 0.4 mas. The astro-photometric catalog, results from artificial star experiments, and the mosaics for all the filters are available for download. Color–magnitude diagrams are presented showing the spatial distributions and ages of stars within 30 Dor as well as in the surrounding fields. HTTP provides the first rich and statistically significant sample of intermediate- and low-mass pre-main sequence candidates and allows us to trace how star formation has been developing through the region. The depth and high spatial resolution of our analysis highlight the dual role of stellar feedback in quenching and triggering star formation on the giant H ii region scale. Our results are consistent with stellar sub-clustering in a partially filled gaseous nebula that is offset toward our side of the Large Magellanic Cloud.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitwood, M. Colter; Lashley, Marcus A.; Kilgo, John C.

    Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter in ungulate population dynamics, there is a growing need to test the effectiveness of camera surveys for assessing fawn recruitment. At Savannah River Site, South Carolina, we used six years of camera-based recruitment estimates (i.e. fawn:doe ratio) to predict concurrently collected annual radiotag-based survival estimates. The coefficient of determination (R) was 0.445, indicating some support for the viability of cameras to reflect recruitment. Here, we added two years of data from Fort Bragg Military Installation, North Carolina, which improved R to 0.621 without accounting for site-specific variability. Also, we evaluated the correlation between year-to-year changes in recruitment and survival using the Savannah River Site data; R was 0.758, suggesting that camera-based recruitment could be useful as an indicator of the trend in survival. Because so few researchers concurrently estimate survival and camera-based recruitment, examining this relationship at larger spatial scales while controlling for numerous confounding variables remains difficult. We believe that future research should test the validity of our results from other areas with varying deer and camera densities, as site (e.g. presence of feral pigs Sus scrofa) and demographic (e.g. fawn age at time of camera survey) parameters may have a large influence on detectability. Until such biases are fully quantified, we urge researchers and managers to use caution when advocating the use of camera-based recruitment estimates.
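
    The R values quoted above come from regressing radiotag-based survival estimates on camera-based recruitment estimates. A minimal sketch of that analysis pattern, ordinary least squares plus the coefficient of determination, on invented numbers (not the study's data):

```python
def ols_r2(x, y):
    """Simple linear regression of y on x; returns (slope, intercept, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical annual estimates (invented for illustration only):
recruitment = [0.30, 0.45, 0.25, 0.60, 0.40, 0.55]   # camera fawn:doe ratio
survival    = [0.35, 0.50, 0.33, 0.58, 0.41, 0.57]   # radiotag survival
slope, intercept, r2 = ols_r2(recruitment, survival)
print(round(r2, 3))
```

    Note the study's small sample (six to eight site-years) is exactly why the authors caution that confounders like site and fawn age are hard to control for.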

  6. Integration and verification testing of the Large Synoptic Survey Telescope camera

    NASA Astrophysics Data System (ADS)

    Lange, Travis; Bond, Tim; Chiang, James; Gilmore, Kirk; Digel, Seth; Dubois, Richard; Glanzman, Tom; Johnson, Tony; Lopez, Margaux; Newbry, Scott P.; Nordby, Martin E.; Rasmussen, Andrew P.; Reil, Kevin A.; Roodman, Aaron J.

    2016-08-01

    We present an overview of the Integration and Verification Testing activities of the Large Synoptic Survey Telescope (LSST) Camera at the SLAC National Accelerator Lab (SLAC). The LSST Camera, the sole instrument for LSST and now under construction, comprises a 3.2 Giga-pixel imager and a three-element corrector with a 3.5 degree diameter field of view. LSST Camera integration and test will take place over the next four years, with final delivery to the LSST observatory anticipated in early 2020. We outline the planning for integration and test, describe some of the key verification hardware systems being developed, and identify some of the more complicated assembly/integration activities. Specific details of integration and verification hardware systems will be discussed, highlighting some of the technical challenges anticipated.

  7. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

    The CCD Toolkit (1) is a software tool for testing CCD cameras which allows the user to measure important characteristics of a camera such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in the FITS format, commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed to ensure compatibility with different operating systems: Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
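
    Two of the characteristics such a toolkit measures, total gain and readout noise, are classically derived from pairs of bias and flat-field frames via the photon-transfer method: differencing each pair cancels fixed-pattern structure, and the Poisson statistics of the flat signal calibrate ADU against electrons. A sketch on synthetic frames (the CCD Toolkit's actual algorithms may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frames standing in for real camera data: true gain 2.0 e-/ADU,
# read noise 5 e-, flat-field signal 20000 e- (all illustrative).
GAIN_TRUE, READ_NOISE_E, SIGNAL_E = 2.0, 5.0, 20000.0
SHAPE = (256, 256)

def frame(signal_e):
    """One simulated exposure, in ADU: Poisson photon noise + Gaussian read noise."""
    electrons = rng.poisson(signal_e, SHAPE) + rng.normal(0.0, READ_NOISE_E, SHAPE)
    return electrons / GAIN_TRUE

bias1, bias2 = frame(0.0), frame(0.0)
flat1, flat2 = frame(SIGNAL_E), frame(SIGNAL_E)

# Photon transfer: gain = signal / (shot-noise variance), both in ADU units.
signal_adu = flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()
var_flat_diff = np.var(flat1 - flat2)   # 2*(shot + read) variance
var_bias_diff = np.var(bias1 - bias2)   # 2*read variance
gain = signal_adu / (var_flat_diff - var_bias_diff)        # e-/ADU
read_noise = gain * np.std(bias1 - bias2) / np.sqrt(2.0)   # e-
print(float(gain), float(read_noise))   # ~2.0 e-/ADU, ~5.0 e-
```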

  8. Focal plane wavefront sensor achromatization: The multireference self-coherent camera

    NASA Astrophysics Data System (ADS)

    Delorme, J. R.; Galicher, R.; Baudoz, P.; Rousset, G.; Mazoyer, J.; Dupuis, O.

    2016-04-01

    Context. High contrast imaging and spectroscopy provide unique constraints for exoplanet formation models as well as for planetary atmosphere models. But this can be challenging because of the small planet-to-star angular separation (<1 arcsec) and high flux ratio (>10⁵). Recently, optimized instruments like VLT/SPHERE and Gemini/GPI were installed on 8 m-class telescopes. These will probe young gaseous exoplanets at large separations (≳1 au) but, because of uncalibrated phase and amplitude aberrations that induce speckles in the coronagraphic images, they are not able to detect older and fainter planets. Aims: There are always aberrations that are slowly evolving in time. They create quasi-static speckles that cannot be calibrated a posteriori with sufficient accuracy. An active correction of these speckles is thus needed to reach very high contrast levels (>10⁶-10⁷). This requires a focal plane wavefront sensor. Our team proposed the self-coherent camera, the performance of which was demonstrated in the laboratory. As all focal plane wavefront sensors, it is sensitive to chromatism, and we propose an upgrade that mitigates the chromatism effects. Methods: First, we recall the principle of the self-coherent camera and explain its limitations in polychromatic light. Then, we present and numerically study two upgrades to mitigate chromatism effects: the optical path difference method and the multireference self-coherent camera. Finally, we present laboratory tests of the latter solution. Results: We demonstrate in the laboratory that the multireference self-coherent camera can be used as a focal plane wavefront sensor in polychromatic light using an 80 nm bandwidth at 640 nm (a bandwidth of 12.5%). We reach a performance that is close to the chromatic limitations of our bench: a 1σ contrast of 4.5 × 10⁻⁸ between 5 and 17 λ0/D.
Conclusions: The performance of the MRSCC is promising for future high-contrast imaging instruments that aim to actively minimize the speckle intensity so as to detect and spectrally characterize faint old or light gaseous planets.

  9. Large Format Si:As IBC Array Performance for NGST and Future IR Space Telescope Applications

    NASA Technical Reports Server (NTRS)

    Ennico, Kimberly; Johnson, Roy; Love, Peter; Lum, Nancy; McKelvey, Mark; McCreight, Craig; McMurray, Robert, Jr.; DeVincenzi, D. (Technical Monitor)

    2002-01-01

    A mid-IR (5-30 micrometer) instrument aboard a cryogenic space telescope can have an enormous impact in resolving key questions in astronomy and cosmology. A space platform's greatly reduced thermal backgrounds (compared to airborne or ground-based platforms) allow for more sensitive observations of dusty young galaxies at high redshifts, star formation of solar-type stars in the local universe, and the formation and evolution of planetary disks and systems. The previous generation's largest, most sensitive IR detectors at these wavelengths are 256x256 pixel Si:As Impurity Band Conduction (IBC) devices built by Raytheon Infrared Operations (RIO) for the Space Infrared Telescope Facility (SIRTF) Infrared Array Camera (IRAC) instrument. RIO has successfully enhanced these devices, increasing the pixel count by a factor of 16 while matching or exceeding SIRTF/IRAC device performance. NASA-ARC, in collaboration with RIO, has tested the first high-performance large format (1024x1024) Si:As IBC arrays for low-background applications, such as the middle instrument on the Next Generation Space Telescope (NGST) and future IR Explorer missions. These hybrid devices consist of radiation-hard SIRTF/IRAC-type Si:As IBC material mated to a readout multiplexer that has been specially processed for operation at low cryogenic temperatures (below 10 K), yielding high device sensitivity over a wavelength range of 5-28 micrometers. We present laboratory testing results from these benchmark devices. Continued development of this technology is essential for conducting large-area surveys of the local and early universe and for complementing future missions such as NGST, the Terrestrial Planet Finder (TPF), and the Far InfraRed and Submillimetre Telescope (FIRST).

  10. Space telescope optical telescope assembly/scientific instruments. Phase B: -Preliminary design and program definition study; Volume 2A: Planetary camera report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Development of the F/48, F/96 Planetary Camera for the Large Space Telescope is discussed. Instrument characteristics, optical design, and CCD camera submodule thermal design are considered along with structural subsystem and thermal control subsystem. Weight, electrical subsystem, and support equipment requirements are also included.

  11. Characterization of dynamic droplet impaction and deposit formation on leaf surfaces

    USDA-ARS?s Scientific Manuscript database

    Elucidation of droplet dynamic impaction and deposit formation on leaf surfaces would assist in optimizing application strategies, improving biological control efficiency, and minimizing pesticide waste. A custom-designed system consisting of two high-speed digital cameras and a uniform-size droplet ge...

  12. Solar Extreme Ultraviolet Rocket Telescope Spectrograph ** SERTS ** Detector and Electronics subsystems

    NASA Astrophysics Data System (ADS)

    Payne, L.; Haas, J. P.; Linard, D.; White, L.

    1997-12-01

    The Laboratory for Astronomy and Solar Physics at Goddard Space Flight Center uses a variety of imaging sensors for its instrumentation programs. This paper describes the detector system for SERTS. The SERTS rocket telescope uses an open-faceplate, single-plate MCP tube as the primary detector for EUV spectra from the Sun. The optical output of this detector is fiber-optically coupled to a cooled, large format CCD. This CCD is operated using a software-controlled camera controller based upon a design used for the SOHO/CDS mission. This camera is a general-purpose design, with a topology that supports multiple types of imaging devices. Multiport devices (up to 4 ports) and multiphase clocks are supported, as well as variable-speed operation. Clock speeds from 100 kHz to 1 MHz have been used, and the topology is currently being extended to support 10 MHz operation. The form factor for the camera system is based on the popular VME bus. Because the tube is an open-faceplate design, the detector system has an assortment of vacuum doors and plumbing to allow operation in vacuum but provide for safe storage at normal atmosphere. Vac-ion pumps (3) are used to maintain working vacuum at all times. Marshall Space Flight Center provided the SERTS program with HVPS units for both the vac-ion pumps and the MCP tube. The MCP tube HVPS is a direct derivative of the design used for the SXI mission for NOAA. Auxiliary equipment includes a frame buffer that works either as a multi-frame storage unit or as a photon-counting accumulation unit. This unit also performs interface buffering so that the camera may appear as a piece of GPIB instrumentation.

  13. SUBSA and PFMI Transparent Furnace Systems Currently in use in the International Space Station Microgravity Science Glovebox

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie A.; Gilley, Scott; Ostrogorsky, Aleksander; Grugel, Richard; Smith, Guy; Luz, Paul

    2003-01-01

    The Solidification Using a Baffle in Sealed Ampoules (SUBSA) and Pore Formation and Mobility Investigation (PFMI) furnaces were developed for operation in the International Space Station (ISS) Microgravity Science Glovebox (MSG). Both furnaces were launched to the ISS on STS-111 on June 4, 2002, and are currently in use on orbit. The SUBSA furnace provides a maximum temperature of 850°C and can accommodate a metal sample as large as 30 cm long and 12 mm in diameter. SUBSA utilizes a gradient freeze process with a minimum cooldown rate of 0.5°C per min and a stability of ±0.15°C. An 8 cm long transparent gradient zone coupled with a Cohu 3812 camera and quartz ampoule allows for observation and video recording of the solidification process. PFMI is a Bridgman-type furnace that operates at a maximum temperature of 130°C and can accommodate a sample 23 cm long and 10 mm in diameter. Two Cohu 3812 cameras mounted 90 deg apart move on a separate translation system, which allows for viewing of the sample in the transparent hot zone and gradient zone independent of the furnace translation rate and direction. Translation rates for both the cameras and the furnace can be specified from 0.5 micrometers/sec to 100 micrometers/sec with a stability of ±5%. The two furnaces share a Process Control Module (PCM) which controls the furnace hardware, a Data Acquisition Pad (DaqPad) which provides signal conditioning of thermocouple data, and two Cohu 3812 cameras. The hardware and software allow for real-time monitoring and commanding of critical process control parameters. This paper will provide a detailed explanation of the SUBSA and PFMI systems along with performance data and some preliminary results from completed on-orbit processing runs.

  14. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  15. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optic link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames and controls the timing of the array. The camera is intended for imaging of large extragalactic and Galactic fields; a large effort has been dedicated to exploring the possibility of achieving precise photometric measurements in the J, H, and K astronomical bands, with very promising results.

  16. High performance digital read out integrated circuit (DROIC) for infrared imaging

    NASA Astrophysics Data System (ADS)

    Mizuno, Genki; Olah, Robert; Oduor, Patrick; Dutta, Achyut K.; Dhar, Nibir K.

    2016-05-01

    Banpil Photonics has developed a high-performance Digital Read-Out Integrated Circuit (DROIC) for image sensors and camera systems targeting various military, industrial, and commercial infrared (IR) imaging applications. On-chip digitization of the pixel output eliminates the need for an external analog-to-digital converter (ADC), which not only cuts costs but also enables miniaturization of packaging to achieve SWaP-C camera systems. In addition, the DROIC offers new opportunities for greater on-chip processing intelligence that are not possible in the conventional analog ROICs prevalent today. Conventional ROICs, which typically can enhance only one high-performance attribute such as frame rate, power consumption, or noise level, fail when simultaneously targeting the most aggressive performance requirements demanded in imaging applications today. Additionally, scaling analog readout circuits to meet such requirements leads to expensive, high-power-consumption, large, and complex systems that are untenable given the trend toward SWaP-C. We present the implementation of a VGA-format (640x512 pixels, 15 μm pitch) capacitive transimpedance amplifier (CTIA) DROIC architecture that incorporates a 12-bit ADC at the pixel level. The CTIA pixel input circuitry has two gain modes with programmable full-well capacity values of 100K e- and 500K e-. The DROIC has been developed with a system-on-chip architecture in mind, where all timing and biasing are generated internally without requiring any critical external inputs. The chip is configurable, with many parameters programmable through a serial programmable interface (SPI). It features a global shutter, low power, and high frame rates programmable from 30 up to 500 frames per second in full VGA format, supported through 24 LVDS outputs.
    This DROIC, suitable for hybridization with focal plane arrays (FPAs), is ideal for high-performance uncooled camera applications ranging from near-IR (NIR) and shortwave IR (SWIR) to mid-wave IR (MWIR) and long-wave IR (LWIR) spectral bands.

  17. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  18. Development and calibration of a new gamma camera detector using large square Photomultiplier Tubes

    NASA Astrophysics Data System (ADS)

    Zeraatkar, N.; Sajedi, S.; Teimourian Fard, B.; Kaviani, S.; Akbarzadeh, A.; Farahani, M. H.; Sarkar, S.; Ay, M. R.

    2017-09-01

    Large-area scintillation detectors, applied in gamma cameras as well as Single Photon Emission Computed Tomography (SPECT) systems, have a major role in in-vivo functional imaging. Most gamma detectors utilize a hexagonal arrangement of Photomultiplier Tubes (PMTs). In this work we applied large square-shaped PMTs in a row/column arrangement for positioning. The use of large square PMTs reduces dead zones on the detector surface; however, the conventional center-of-gravity method for positioning may not produce acceptable results. Hence, the digital correlated signal enhancement (CSE) algorithm was optimized to obtain better linearity and spatial resolution in the developed detector. The performance of the developed detector was evaluated based on the NEMA NU 1-2007 standard. The images acquired using this method showed acceptable uniformity and linearity compared to three commercial gamma cameras. The intrinsic and extrinsic spatial resolutions with a low-energy high-resolution (LEHR) collimator at 10 cm from the detector surface were 3.7 mm and 7.5 mm, respectively, and the energy resolution of the camera was measured to be 9.5%. The performance evaluation demonstrated that the developed detector maintains image quality with a reduced number of PMTs relative to the detection area.
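    The contrast the abstract draws between conventional center-of-gravity positioning and a weighting scheme that improves linearity can be sketched in a few lines. The PMT signals, positions, and power-law weighting below are illustrative assumptions, not the authors' actual CSE algorithm:

```python
import numpy as np

# Hypothetical column-sum signals from six square PMTs (arbitrary units),
# for an event under the PMT centered at +25 mm.
pmt_signals = np.array([0.0, 2.0, 10.0, 60.0, 25.0, 3.0])
pmt_centers_mm = np.array([-125.0, -75.0, -25.0, 25.0, 75.0, 125.0])

def center_of_gravity(signals, centers):
    """Conventional Anger-logic estimate: signal-weighted centroid.
    Broad light spread biases this away from the true interaction point."""
    return np.sum(signals * centers) / np.sum(signals)

def enhanced_position(signals, centers, power=2.0):
    """Sketch of an enhancement step: emphasizing the strongest PMTs
    (here via a power law) suppresses the tails that bias the centroid."""
    w = signals ** power
    return np.sum(w * centers) / np.sum(w)

x_cog = center_of_gravity(pmt_signals, pmt_centers_mm)  # lands at 33.5 mm
x_enh = enhanced_position(pmt_signals, pmt_centers_mm)  # closer to +25 mm
```

With these illustrative values the plain centroid is pulled away from the peak PMT at +25 mm by the signal tails, while the weighted estimate moves closer; the actual CSE algorithm operates on correlated signals rather than a simple power law.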

  19. Application of imaging to the atmospheric Cherenkov technique

    NASA Technical Reports Server (NTRS)

    Cawley, M. F.; Fegan, D. J.; Gibbs, K.; Gorham, P. W.; Hillas, A. M.; Lamb, R. C.; Liebing, D. F.; Mackeown, P. K.; Porter, N. A.; Stenger, V. J.

    1985-01-01

    Turver and Weekes proposed using a system of phototubes in the focal plane of a large reflector to give an air Cherenkov camera for gamma-ray astronomy. Preliminary results with a 19-element camera have been reported previously. In 1983 the camera was increased to 37 pixels; it has now been routinely operated for two years. A brief physical description of the camera, its mode of operation, and the data reduction procedures are presented. The Monte Carlo simulations on which these are based are also reviewed.

  20. Studying medical communication with video vignettes: a randomized study on how variations in video-vignette introduction format and camera focus influence analogue patients' engagement.

    PubMed

    Visser, Leonie N C; Bol, Nadine; Hillen, Marij A; Verdam, Mathilde G E; de Haes, Hanneke C J M; van Weert, Julia C M; Smets, Ellen M A

    2018-01-19

    Video vignettes are used to test the effects of physicians' communication on patient outcomes. Methodological choices in video-vignette development may have far-reaching consequences for participants' engagement with the video, and thus for the ecological validity of this design. To supplement the scant evidence in this field, this study tested how variations in video-vignette introduction format and camera focus influence participants' engagement with a video vignette showing a bad-news consultation. Introduction format (A = audiovisual vs. B = written) and camera focus (1 = the physician only, 2 = the physician and the patient at neutral moments alternately, 3 = the physician and the patient at emotional moments alternately) were varied in a randomized 2 × 3 between-subjects design. One hundred eighty-one students were randomly assigned to watch one of the six resulting video-vignette conditions as so-called analogue patients, i.e., they were instructed to imagine themselves in the video patient's situation. Four dimensions of self-reported engagement were assessed retrospectively. Emotional engagement was additionally measured by recording participants' electrodermal and cardiovascular activity continuously while watching. Analyses of variance were used to test the effects of introduction format, camera focus, and their interaction. The audiovisual introduction induced a stronger blood pressure response than the written introduction while participants watched the introduction (p = 0.048, η² = 0.05) and the consultation part of the vignette (p = 0.051, η² = 0.05). With respect to camera focus, the variant focusing on the patient at emotional moments evoked a higher level of electrodermal activity (p = 0.003, η² = 0.06) than the other two variants.
    Furthermore, an interaction effect was found for self-reported emotional engagement (p = 0.045, η² = 0.04): the physician-only variant resulted in lower emotional engagement when the vignette was preceded by the audiovisual introduction. No effects were found on the other dimensions of self-reported engagement. Our findings imply that an audiovisual introduction combined with an alternating camera focus depicting the patient's emotions results in the highest levels of emotional engagement in analogue patients. This evidence can inform methodological decisions during the development of video vignettes, and thereby enhance the ecological validity of future video-vignette studies.

  1. A Taxonomy of Asynchronous Instructional Video Styles

    ERIC Educational Resources Information Center

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  2. Cloud formation over Western Atlantic Ocean north of South America

    NASA Image and Video Library

    1962-10-03

    S62-06606 (3 Oct. 1962) --- Cloud formation over Western Atlantic Ocean north of South America taken during the fourth orbit pass of the Mercury-Atlas 8 (MA-8) mission by astronaut Walter M. Schirra Jr. with a hand-held camera. Photo credit: NASA

  3. Thematic Conference on Remote Sensing for Exploration Geology, 6th, Houston, TX, May 16-19, 1988, Proceedings. Volumes 1 & 2

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Papers concerning remote sensing applications for exploration geology are presented, covering topics such as remote sensing technology, data availability, frontier exploration, and exploration in mature basins. Other topics include offshore applications, geobotany, mineral exploration, engineering and environmental applications, image processing, and prospects for future developments in remote sensing for exploration geology. Consideration is given to the use of data from Landsat, MSS, TM, SAR, short wavelength IR, the Geophysical Environmental Research Airborne Scanner, gas chromatography, sonar imaging, the Airborne Visible-IR Imaging Spectrometer, field spectrometry, airborne thermal IR scanners, SPOT, AVHRR, SIR, the Large Format Camera, and multitemporal satellite photographs.

  4. Collapse Pits

    NASA Technical Reports Server (NTRS)

    2005-01-01

    24 April 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows one large and several small pits formed by collapse along the trend of a fault system in the Uranius Fossae region of Mars. Running diagonally from middle-right toward lower left is a trough that intersects the large pit. The trough is a typical graben, formed by faulting as the upper crust of Mars split and pulled apart at this location. The opening of the graben also led to formation of the collapse pits.

    Location near: 26.2°N, 88.7°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Summer

  5. Low-altitude photographic transects of the Arctic Network of National Park Units and Selawik National Wildlife Refuge, Alaska, July 2013

    USGS Publications Warehouse

    Marcot, Bruce G.; Jorgenson, M. Torre; DeGange, Anthony R.

    2014-01-01

    5. A Canon® Rebel 3Ti with a Sigma zoom lens (18–200 mm focal length). The Drift® HD-170 and GoPro® Hero3 cameras were secured to the struts and underwing for nadir (direct downward) imaging. The Panasonic® and Canon® cameras were each hand-held for oblique-angle landscape images, shooting through the airplanes’ windows, targeting both general landscape conditions as well as landscape features of special interest, such as tundra fire scars and landslips. The Drift® and GoPro® cameras each were set for time-lapse photography at 5-second intervals for overlapping coverage. Photographs from all cameras (100 percent .jpg format) were date- and time-synchronized to global positioning system waypoints taken during the flights, also at 5-second intervals, providing precise geotagging (latitude-longitude) of all files. All photographs were adjusted for color saturation and gamma, and nadir photographs were corrected for lens distortion for the Drift® and GoPro® cameras’ 170° wide-angle distortion. EXIF (exchangeable image file format) data on camera settings and geotagging were extracted into spreadsheet databases. An additional 1 hour, 20 minutes, and 43 seconds of high-resolution videos were recorded at 60 frames per second with the GoPro® camera along selected transect segments, and also were image-adjusted and corrected for lens distortion. Geotagged locations of 12,395 nadir photographs from the Drift® and GoPro® cameras were overlaid in a geographic information system (ArcMap 10.0) onto a map of 44 ecotypes (land- and water-cover types) of the Arctic Network study area. Presence and area of each ecotype occurring within a geographic information system window centered on the location of each photograph were recorded and included in the spreadsheet databases. All original and adjusted photographs, videos, global positioning system flight tracks, and photograph databases are available by contacting ascweb@usgs.gov.
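    The time-synchronized geotagging described above (matching each photograph's timestamp to GPS waypoints logged at 5-second intervals) can be sketched as follows; the waypoint values and the function name are illustrative assumptions:

```python
from bisect import bisect_left
from datetime import datetime

# Hypothetical waypoint log: (UTC time, latitude, longitude) every 5 seconds.
waypoints = [
    (datetime(2013, 7, 10, 18, 0, 0), 67.1000, -158.5000),
    (datetime(2013, 7, 10, 18, 0, 5), 67.1004, -158.4992),
    (datetime(2013, 7, 10, 18, 0, 10), 67.1008, -158.4984),
]

def geotag(photo_time, waypoints):
    """Assign a photo the lat/lon of the waypoint nearest in time.

    Assumes waypoints are sorted by time, as a flight log would be.
    """
    times = [w[0] for w in waypoints]
    i = bisect_left(times, photo_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(waypoints)]
    best = min(candidates, key=lambda j: abs(waypoints[j][0] - photo_time))
    return waypoints[best][1], waypoints[best][2]

# A photo taken at 18:00:04 is matched to the 18:00:05 waypoint.
lat, lon = geotag(datetime(2013, 7, 10, 18, 0, 4), waypoints)
```

In practice the EXIF timestamps would first be corrected for any camera-clock offset before matching against the flight track.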

  6. Grain-Scale Analyses of Curiosity Data at Marias Pass, Gale Crater, Mars: Methods Comparison and Depositional Interpretation

    NASA Astrophysics Data System (ADS)

    Sacks, L. E.; Edgar, L. A.; Edwards, C. S.; Anderson, R. B.

    2016-12-01

    Images acquired by the Mars Hand Lens Imager (MAHLI) and the ChemCam Remote Micro Imager (RMI) onboard the Mars Science Laboratory (MSL) Curiosity rover provide grain-scale data that are critical for interpreting sedimentary deposits. At the location informally known as Marias Pass, Curiosity used both cameras to image the nine rock targets used in this study. We used manual point counts to measure grain size distributions from those images to compare the abilities of the two cameras. The manually derived results were compared to automated grain size data obtained using pyDGS (Digital Grain Size), an open-source Python program. Grain size analyses were used to test the lacustrine and aeolian depositional hypotheses for the Murray and Stimson formations at Marias Pass. Results indicate that the MAHLI and RMI instruments, despite their different fields of view and properties, provide comparable grain size measurements. Additionally, pyDGS does not account for grains smaller than a few pixels; it therefore does not report representative grain size data and should not be used on images with a large fraction of unresolved grains. Finally, the data collected at Marias Pass are consistent with the existing interpretations of the Murray and Stimson formations. The fine-grained results of the Murray formation analyses support lacustrine deposition, while the mean grain size of the Stimson formation is fine- to medium-grained sand, consistent with aeolian deposition. However, directly above the contact with the Murray formation, larger rip-up clasts of the Murray formation are present in the Stimson formation. It is possible that water was involved at this stage of erosion and re-deposition, prior to aeolian deposition. Additionally, the grain-scale analyses conducted in this study show that the Dust Removal Tool on Curiosity should be used prior to capturing images for grain-scale analysis.
    Two images of the target informally named Ronan, taken before and after brushing, yielded dramatically different grain size results, suggesting that a common, thin layer of dust obscured the true grain size distribution. These grain-scale analyses at Marias Pass have important implications for the collection and processing of image data, as well as the depositional environments recorded in Gale crater. Funded by NSF Grant AST-1461200.

  7. Infrared-enhanced TV for fire detection

    NASA Technical Reports Server (NTRS)

    Hall, J. R.

    1978-01-01

    Closed-circuit television is superior to conventional smoke or heat sensors for detecting fires in large open spaces. Single TV camera scans entire area, whereas many conventional sensors and maze of interconnecting wiring might be required to get same coverage. Camera is monitored by person who would trip alarm if fire were detected, or electronic circuitry could process camera signal for fully-automatic alarm system.

  8. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    NASA Astrophysics Data System (ADS)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011, and very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. Its gamma-ray sensor employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using multi-angle data measured with the Compton camera. We successfully obtained 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
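    The back-projection in such a Compton camera rests on standard Compton kinematics: each event defines a cone whose opening angle follows from the energies deposited in the scatterer and absorber. A minimal sketch, where the function name and the 662 keV example values are illustrative rather than the authors' reconstruction code:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the back-projection cone, from standard
    Compton kinematics: cos(theta) = 1 - me*c^2*(1/E_absorbed - 1/E_total)."""
    e_total = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_theta))

# A 662 keV (137Cs) photon depositing 200 keV in the scatterer and
# 462 keV in the absorber defines a cone of roughly 48 degrees.
angle = compton_cone_angle(200.0, 462.0)
```

Each measured event constrains the source to lie on such a cone; intersecting many cones recorded from multiple camera positions is what yields the 3D image.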

  9. Robust estimation of simulated urinary volume from camera images under bathroom illumination.

    PubMed

    Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji

    2016-08-01

    Conventional uroflowmetry involves a risk of nosocomial infection or the time and effort of manual recording, so medical institutions need a simple, hygienic way to measure voided volume. An earlier study proposed a multiple cylindrical model that can estimate fluid flow rate from photographed images. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, when measurements are performed in a bathroom, variation in illumination generates large amounts of noise in the extraction of the liquid region, and the estimation error becomes very large; in other words, the earlier study's camera specifications regarding shutter type and frame rate were too strict. In this study, we relax those specifications to achieve flow rate estimation with a general-purpose camera. To determine an appropriate approximate curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
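    The two ingredients named above, row-wise background subtraction for binarization and RANSAC for robust curve approximation, can be sketched as follows. A straight-line model stands in for the paper's approximate curve, and the thresholds and data are illustrative assumptions:

```python
import random
import numpy as np

def binarize_rows(frame, background, thresh=25):
    """Background subtraction evaluated along each scanning row (vectorized
    here over the whole frame): pixels differing from the background by
    more than `thresh` are marked as candidate liquid pixels."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def ransac_line(points, n_iter=200, tol=2.0, seed=0):
    """Robustly fit y = a*x + b: repeatedly sample two points, count inliers,
    and keep the model with the most inliers, so segmentation outliers from
    illumination noise do not skew the fit."""
    rng = random.Random(seed)
    best_model, best_count = None, -1
    pts = list(points)
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:
            continue  # vertical pair cannot form y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        count = sum(1 for x, y in pts if abs(y - (a * x + b)) < tol)
        if count > best_count:
            best_model, best_count = (a, b), count
    return best_model

# Samples of y = 2x + 1 contaminated with two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]
a, b = ransac_line(pts)  # recovers a = 2, b = 1 despite the outliers
```

A least-squares fit over the same points would be dragged toward the outliers; the consensus step is what makes the estimate stable under bathroom illumination.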

  10. Invited Article: Quantitative imaging of explosions with high-speed cameras

    DOE PAGES

    McNesby, Kevin L.; Homan, Barrie E.; Benjamin, Richard A.; ...

    2016-05-31

    Here, the techniques presented in this paper allow for mapping of temperature, pressure, chemical species, and energy deposition during and following detonations of explosives, using high speed cameras as the main diagnostic tool. Additionally, this work provides measurement in the explosive near to far-field (0-500 charge diameters) of surface temperatures, peak air-shock pressures, some chemical species signatures, shock energy deposition, and air shock formation.

  11. Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications

    DTIC Science & Technology

    2013-06-01

    Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave Corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thus, digital cinema cameras are more suitable

  12. Recent developments in space shuttle remote sensing, using hand-held film cameras

    NASA Technical Reports Server (NTRS)

    Amsbury, David L.; Bremer, Jeffrey M.

    1992-01-01

    The authors report on the advantages and disadvantages of a number of camera systems currently employed for space shuttle remote sensing operations. Systems discussed include the modified Hasselblad, the Rolleiflex 6008, the Linhof 5-inch-format system, and the Nikon F3/F4 systems. Film/filter combinations (color positive films, color infrared films, color negative films, and polarization filters) are presented.

  13. Improved TDEM formation using fused ladar/digital imagery from a low-cost small UAV

    NASA Astrophysics Data System (ADS)

    Khatiwada, Bikalpa; Budge, Scott E.

    2017-05-01

    Formation of a Textured Digital Elevation Model (TDEM) is useful in many applications in agriculture, disaster response, terrain analysis, and more. Use of a low-cost small UAV system with a texel camera (fused lidar/digital imagery) can significantly reduce cost compared to conventional aircraft-based methods. This paper reports continued work on this problem, described in a previous paper by Bybee and Budge, with improvements in performance. A UAV fitted with a texel camera is flown at a fixed height above the terrain, and swaths of texel image data of the terrain below are taken continuously. Each texel swath has one or more lines of lidar data surrounded by a narrow strip of EO data. Texel swaths are taken such that there is some overlap from one swath to the adjacent swath. The GPS/IMU fitted on the camera also gives coarse knowledge of attitude and position. Using this coarse knowledge and the information in the texel image, the error in the camera position and attitude is reduced, which helps produce an accurate TDEM. This paper improves on the original work by using multiple lines of lidar data per swath. The final results are shown and analyzed for numerical accuracy.

  14. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station, allowing the cameras to be tested in the space environment. The HDEV cameras transmit imagery continuously to an encoder that sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere of dry nitrogen to provide a level of protection from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery is analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation of imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  15. Status and performance of HST/Wide Field Camera 3

    NASA Astrophysics Data System (ADS)

    Kimble, Randy A.; MacKenty, John W.; O'Connell, Robert W.

    2006-06-01

    Wide Field Camera 3 (WFC3) is a powerful UV/visible/near-infrared camera currently in development for installation into the Hubble Space Telescope. WFC3 provides two imaging channels. The UVIS channel features a 4096 x 4096 pixel CCD focal plane covering 200 to 1000 nm wavelengths with a 160 x 160 arcsec field of view. The UVIS channel provides unprecedented sensitivity and field of view in the near ultraviolet for HST. It is particularly well suited for studies of the star formation history of local galaxies and clusters, searches for Lyman alpha dropouts at moderate redshift, and searches for low surface brightness structures against the dark UV sky background. The IR channel features a 1024 x 1024 pixel HgCdTe focal plane covering 800 to 1700 nm with a 139 x 123 arcsec field of view, providing a major advance in IR survey efficiency for HST. IR channel science goals include studies of dark energy, galaxy formation at high redshift, and star formation. The instrument is being prepared for launch as part of HST Servicing Mission 4, tentatively scheduled for late 2007, contingent upon formal approval of shuttle-based servicing after successful shuttle return-to-flight. We report here on the status and performance of WFC3.

  16. A Review of Some Superconducting Technologies for AtLAST: Parametric Amplifiers, Kinetic Inductance Detectors, and On-Chip Spectrometers

    NASA Astrophysics Data System (ADS)

    Noroozian, Omid

    2018-01-01

    The current state of the art of some superconducting technologies will be reviewed in the context of a future single-dish submillimeter telescope called AtLAST. The technologies reviewed include: 1) Kinetic Inductance Detectors (KIDs), which have now been demonstrated in large-format kilopixel arrays with photon-background-limited sensitivity suitable for large field-of-view cameras for wide-field imaging; 2) parametric amplifiers, specifically the Traveling-Wave Kinetic Inductance Parametric (TKIP) amplifier, which has enormous potential to increase the sensitivity, bandwidth, and mapping speed of heterodyne receivers; and 3) on-chip spectrometers, which, combined with sensitive direct detectors such as KIDs or TESs, could be used as multi-object spectrometers on the AtLAST focal plane, providing low- to medium-resolution spectroscopy of 100 objects at a time in each field of view.

  17. Legacy Extragalactic UV Survey (LEGUS) With the Hubble Space Telescope. I. Survey Description

    NASA Astrophysics Data System (ADS)

    Calzetti, D.; Lee, J. C.; Sabbi, E.; Adamo, A.; Smith, L. J.; Andrews, J. E.; Ubeda, L.; Bright, S. N.; Thilker, D.; Aloisi, A.; Brown, T. M.; Chandar, R.; Christian, C.; Cignoni, M.; Clayton, G. C.; da Silva, R.; de Mink, S. E.; Dobbs, C.; Elmegreen, B. G.; Elmegreen, D. M.; Evans, A. S.; Fumagalli, M.; Gallagher, J. S., III; Gouliermis, D. A.; Grebel, E. K.; Herrero, A.; Hunter, D. A.; Johnson, K. E.; Kennicutt, R. C.; Kim, H.; Krumholz, M. R.; Lennon, D.; Levay, K.; Martin, C.; Nair, P.; Nota, A.; Östlin, G.; Pellerin, A.; Prieto, J.; Regan, M. W.; Ryon, J. E.; Schaerer, D.; Schiminovich, D.; Tosi, M.; Van Dyk, S. D.; Walterbos, R.; Whitmore, B. C.; Wofford, A.

    2015-02-01

    The Legacy ExtraGalactic UV Survey (LEGUS) is a Cycle 21 Treasury program on the Hubble Space Telescope aimed at the investigation of star formation and its relation with galactic environment in nearby galaxies, from the scales of individual stars to those of ˜kiloparsec-size clustered structures. Five-band imaging from the near-ultraviolet to the I band with the Wide-Field Camera 3 (WFC3), plus parallel optical imaging with the Advanced Camera for Surveys (ACS), is being collected for selected pointings of 50 galaxies within the local 12 Mpc. The filters used for the observations with the WFC3 are F275W(λ2704 Å), F336W(λ3355 Å), F438W(λ4325 Å), F555W(λ5308 Å), and F814W(λ8024 Å); the parallel observations with the ACS use the filters F435W(λ4328 Å), F606W(λ5921 Å), and F814W(λ8057 Å). The multiband images are yielding accurate recent (≲50 Myr) star formation histories from resolved massive stars and the extinction-corrected ages and masses of star clusters and associations. The extensive inventories of massive stars and clustered systems will be used to investigate the spatial and temporal evolution of star formation within galaxies. This will, in turn, inform theories of galaxy evolution and improve the understanding of the physical underpinning of the gas-star formation relation and the nature of star formation at high redshift. This paper describes the survey, its goals and observational strategy, and the initial scientific results. Because LEGUS will provide a reference survey and a foundation for future observations with the James Webb Space Telescope and with ALMA, a large number of data products are planned for delivery to the community. Based on observations obtained with the NASA/ESA Hubble Space Telescope at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA Contract NAS 5-26555.

  18. Observation of Planetary Motion Using a Digital Camera

    ERIC Educational Resources Information Center

    Meyn, Jan-Peter

    2008-01-01

    A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8th apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…

  19. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend on its capability to handle object surfaces with large reflectance variation, traded off against the number of patterns that must be projected. In this paper, we propose and implement a flexible embedded framework that can trigger the camera one or more times to capture one or more projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so the system can project different types of patterns for different scan-speed applications. The system thus captures a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is generated adaptively such that the position and number of triggers are determined automatically according to camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it requires no external memory for storage because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506

  20. Near infrared observations of S 155. Evidence of induced star formation?

    NASA Astrophysics Data System (ADS)

    Hunt, L. K.; Lisi, F.; Felli, M.; Tofani, G.

    In order to investigate the possible existence of embedded objects of recent formation in the area of the Cepheus B - Sh2-155 interface, the authors have observed the region of the compact radio continuum source with the new near-infrared camera ARNICA at the TIRGO telescope.

  1. A novel method to reduce time investment when processing videos from camera trap studies.

    PubMed

    Swinnen, Kristijn R R; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species but were instead empty or showed other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch them, in order to reduce the workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we assumed that recordings with the target species contain, on average, much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the number of non-target recordings that can be identified and discarded. By accepting a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs.
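A minimal sketch of the pixel-variation filter described above (the authors' two actual filter methods are not reproduced here; the grayscale frame format and threshold are assumptions):

```python
import numpy as np

# Illustrative sketch, not the authors' code: keep a recording only if the
# mean frame-to-frame pixel change exceeds a threshold, as a proxy for a
# large animal (e.g. a beaver) moving through the scene.
def mean_frame_variation(frames):
    """frames: sequence of grayscale frames (H x W). Returns the mean
    absolute pixel change between consecutive frames."""
    f = np.asarray(frames, dtype=float)
    return float(np.abs(np.diff(f, axis=0)).mean())

def keep_recording(frames, threshold):
    """True if the recording shows enough motion to be worth watching."""
    return mean_frame_variation(frames) >= threshold
```

A static (empty) clip scores zero variation and is discarded; tuning the threshold trades lost target recordings against discarded non-target ones, the 5-20% vs. 53-76% trade-off reported above.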

  2. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  3. Processing Ocean Images to Detect Large Drift Nets

    NASA Technical Reports Server (NTRS)

    Veenstra, Tim

    2009-01-01

    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.
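The detect-and-aim step can be sketched as a background-subtraction centroid; the background model, threshold, and return convention below are assumptions for illustration, not the adjustable algorithm described in the record:

```python
import numpy as np

# Hedged sketch: difference a frame against a background estimate,
# threshold, and return the anomaly centroid (pixel coordinates) that a
# high-resolution camera could be commanded to point at.
def find_anomaly(frame, background, thresh=30):
    diff = np.abs(frame.astype(float) - background.astype(float))
    mask = diff > thresh
    if not mask.any():
        return None                              # nothing to photograph
    ys, xs = np.nonzero(mask)                    # pixels exceeding threshold
    return (float(xs.mean()), float(ys.mean()))  # (x, y) aim point
```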

  4. The Ultracam Story

    NASA Astrophysics Data System (ADS)

    Leberl, F.; Gruber, M.; Ponticelli, M.; Wiechert, A.

    2012-07-01

    The UltraCam project created a novel large-format digital aerial camera. It was inspired by the ISPRS Congress 2000 in Amsterdam. The search for a promising imaging idea succeeded in May 2001, defining a tiling approach with multiple lenses and multiple area CCD arrays to assemble a seamless and geometrically stable monolithic photogrammetric aerial large-format image. First resources were spent on the project in September 2001. The initial UltraCam-D was announced and demonstrated in May 2003. By now the imaging principle has resulted in a 4th-generation UltraCam Eagle, increasing the original swath width from 11,500 pixels to beyond 20,000. Inspired by the original imaging principle, alternatives have been investigated, and the UltraCam-G carries the swath width even further, to a frame image with nearly 30,000 pixels, albeit with a modified tiling concept optimized for orthophoto production. We explain the advent of digital aerial large-format imaging and how it benefits from improvements in computing technology to cope with data flows at a rate of 3 gigabits per second and a need to deal with terabytes of imagery within a single aerial sortie. We also address the many benefits of a transition to a fully digital workflow, with a paradigm shift away from minimizing a project's number of aerial photographs and towards maximizing the automation of photogrammetric workflows by means of high-redundancy imaging strategies. The instant gratification from near-real-time aerial triangulations and dense image matching has led to a reassessment of the value of photogrammetric point clouds, which now successfully compete with direct point cloud measurements by LiDAR.

  5. Surveying the Newly Digitized Apollo Metric Images for Highland Fault Scarps on the Moon

    NASA Astrophysics Data System (ADS)

    Williams, N. R.; Pritchard, M. E.; Bell, J. F.; Watters, T. R.; Robinson, M. S.; Lawrence, S.

    2009-12-01

    The presence and distribution of thrust faults on the Moon have major implications for lunar formation and thermal evolution. For example, thermal history models for the Moon imply that most of the lunar interior was initially hot. As the Moon cooled over time, some models predict global-scale thrust faults should form as stress builds from global thermal contraction. Large-scale thrust fault scarps with lengths of hundreds of kilometers and maximum relief of up to a kilometer or more, like those on Mercury, are not found on the Moon; however, relatively small-scale linear and curvilinear lobate scarps with maximum lengths typically around 10 km have been observed in the highlands [Binder and Gunga, Icarus, v63, 1985]. These small-scale scarps are interpreted to be thrust faults formed by contractional stresses with relatively small maximum (tens of meters) displacements on the faults. These narrow, low relief landforms could only be identified in the highest resolution Lunar Orbiter and Apollo Panoramic Camera images and under the most favorable lighting conditions. To date, the global distribution and other properties of lunar lobate faults are not well understood. The recent micron-resolution scanning and digitization of the Apollo Mapping Camera (Metric) photographic negatives [Lawrence et al., NLSI Conf. #1415, 2008; http://wms.lroc.asu.edu/apollo] provides a new dataset to search for potential scarps. We examined more than 100 digitized Metric Camera image scans, and from these identified 81 images with favorable lighting (incidence angles between about 55 and 80 deg.) to manually search for features that could be potential tectonic scarps. 
Previous surveys based on Panoramic Camera and Lunar Orbiter images found fewer than 100 lobate scarps in the highlands; in our Apollo Metric Camera image survey, we have found additional regions with one or more previously unidentified linear and curvilinear features on the lunar surface that may represent lobate thrust fault scarps. In this presentation we review the geologic characteristics and context of these newly-identified, potentially tectonic landforms. The lengths and relief of some of these linear and curvilinear features are consistent with previously identified lobate scarps. Most of these features are in the highlands, though a few occur along the edges of mare and/or crater ejecta deposits. In many cases the resolution of the Metric Camera frames (~10 m/pix) is not adequate to unequivocally determine the origin of these features. Thus, to assess if the newly identified features have tectonic or other origins, we are examining them in higher-resolution Panoramic Camera (currently being scanned) and Lunar Reconnaissance Orbiter Camera Narrow Angle Camera images [Watters et al., this meeting, 2009].

  6. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
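The relative-ranking idea (component 1) can be sketched as follows; the feature vectors and Euclidean distance here are placeholders, not the system's learned matching features:

```python
import numpy as np

# Minimal sketch: instead of thresholding an absolute match score, sort
# gallery candidates by feature distance to the probe and work with ranks.
def rank_candidates(probe, gallery):
    """Return gallery indices ordered best-match-first."""
    d = np.linalg.norm(np.asarray(gallery, dtype=float)
                       - np.asarray(probe, dtype=float), axis=1)
    return np.argsort(d).tolist()
```

An operator (or the behaviour-profiling stage) then only needs to inspect the top few ranked candidates per camera view, rather than trusting a fixed score cutoff that lighting or pose changes would invalidate.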

  7. Lens and Camera Arrays for Sky Surveys and Space Surveillance

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Cox, D.; McGraw, J.; Zimmer, P.

    2016-09-01

    In recent years, a number of sky survey projects have chosen to use arrays of commercial cameras coupled with commercial photographic lenses to enable low-cost, wide-area observation. Projects such as SuperWASP, FAVOR, RAPTOR, Lotis, PANOPTES, and DragonFly rely on multiple cameras with commercial lenses to image wide areas of the sky each night. The sensors are usually commercial astronomical charge-coupled devices (CCDs) or digital single-lens reflex (DSLR) cameras, while the lenses are large-aperture, high-end consumer items intended for general photography. While much of this equipment is very capable and relatively inexpensive, this approach comes with a number of significant limitations that reduce the sensitivity and overall utility of the image data. The most frequently encountered limitations include lens vignetting, narrow spectral bandpass, and a relatively large point spread function. Understanding these limits helps to assess the utility of the data and identify areas where advanced optical designs could significantly improve survey performance.

  8. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  9. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  10. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

    This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, engineering cameras, science cameras) and produces 1024 x 1024 (1-megapixel) images in the same format. All onboard image-processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat-field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) provides raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.

  11. Characterization of bedload transport in steep-slope streams

    NASA Astrophysics Data System (ADS)

    Mettra, F.; Heyman, J.; Ancey, C.

    2012-04-01

    Large fluctuations in the sediment transport rate are observed in rivers, particularly in mountain streams at intermediate flow rates. These fluctuations seem to be, to some degree, correlated to the formation and migration of bedforms. Today the central question is still how to understand and account for the strong bedload variability. Recent experimental studies shed new light on the processes. The objective of this presentation is to show some of our results. To understand the behavior and the origins of sediment transport rate fluctuations in the case of steep-slope streams, we conducted laboratory experiments in a 3-m long, 8-cm wide, transparent flume. The experimental parameters are the flume inclination, flow rate and sediment input rate. Well-sorted natural gravel (8.5 mm mean diameter) was used. We focused on two-dimensional flows and incipient bedforms (i.e., for flow rates just above the threshold of incipient motion). A technique based on accelerometers was developed to record every particle passing through the flume outlet (more specifically, we measured the vibrations of a metallic slab, which was impacted by the falling particles). Analysis of bedload transport rates was then possible on all time scales. Moreover, the bed and flow were monitored using 2 cameras. We computed bed elevation, water depth and erosion/deposition at high temporal and spatial rates from camera shots (one image per second during several hours or days). In our laboratory experiments, the fluctuations of the sediment rate were large even for steady flow conditions involving well-sorted particles. Time series exhibited fluctuations at all scales and displayed long-range correlations with a Hurst exponent close to 0.8. The results were compared for different input solid discharges. The main bedforms observed in our flume were anti-dunes migrating upstream. 
Bedform formation and propagation showed intermittency with pulses (high activity) followed by long sequences of low activity. We tried to interpret our results (bedform behavior, bed scouring) in terms of sediment outflow rate.
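A Hurst exponent like the one quoted above can be estimated by rescaled-range (R/S) analysis; the sketch below is a generic textbook estimator applied to a synthetic series, not necessarily the estimator the authors used:

```python
import numpy as np

# Generic rescaled-range (R/S) sketch: for several window sizes n, average
# R/S over non-overlapping segments; the slope of log(R/S) vs log(n)
# estimates the Hurst exponent H (H ~ 0.5 for uncorrelated noise,
# H > 0.5 for long-range correlated series such as the bedload rates).
def hurst_rs(x, window_sizes=(8, 16, 32, 64)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()       # range of cumulative deviations
            s = seg.std()                   # segment standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    return float(np.polyfit(log_n, log_rs, 1)[0])  # slope = H estimate
```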

  12. Star Formation as Seen by the Infrared Array Camera on Spitzer

    NASA Technical Reports Server (NTRS)

    Smith, Howard A.; Allen, L.; Megeath, T.; Barmby, P.; Calvet, N.; Fazio, G.; Hartmann, L.; Myers, P.; Marengo, M.; Gutermuth, R.

    2004-01-01

    The Infrared Array Camera (IRAC) onboard Spitzer has imaged regions of star formation (SF) in its four IR bands with spatial resolutions of approximately 2"/pixel. IRAC is sensitive enough to detect very faint, embedded young stars at levels of tens of µJy, and IRAC photometry can categorize their stages of development: from young protostars with infalling envelopes (Class 0/I), to stars whose infrared excesses derive from accreting circumstellar disks (Class II), to evolved stars dominated by photospheric emission. The IRAC images also clearly reveal and help diagnose associated regions of shocked and/or PDR emission in the clouds; we find existing models provide a good start at explaining the continuum of the SF regions IRAC observes.

  13. Medium and large mammals in the Sierra La Madera, Sonora, Mexico

    Treesearch

    Erick Oswaldo Bermudez-Enriquez; Rosa Elena Jimenez-Maldonado; Gertrudis Yanes-Arvayo; Maria de la Paz Montanez-Armenta; Hugo Silva-Kurumiya

    2013-01-01

    Sierra La Madera is a Sky Island mountain range in the Madrean Archipelago. It is in Fracción V of the Ajos-Bavispe CONANP Reserve in the Municipios (= Counties) of Cumpas, Granados, Huásabas, Moctezuma, and Villa Hidalgo. Medium and large mammals were inventoried using camera traps. Eighteen Wild View 2® camera traps were deployed during four sampling periods: August...

  14. HUBBLE PROVIDES 'ONE-TWO PUNCH' TO SEE BIRTH OF STARS IN GALACTIC WRECKAGE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Two powerful cameras aboard NASA's Hubble Space Telescope teamed up to capture the final stages in the grand assembly of galaxies. The photograph, taken by the Advanced Camera for Surveys (ACS) and the revived Near Infrared Camera and Multi-Object Spectrometer (NICMOS), shows a tumultuous collision between four galaxies located 1 billion light-years from Earth. The galactic car wreck is creating a torrent of new stars. The tangled up galaxies, called IRAS 19297-0406, are crammed together in the center of the picture. IRAS 19297-0406 is part of a class of galaxies known as ultraluminous infrared galaxies (ULIRGs). ULIRGs are considered the progenitors of massive elliptical galaxies. ULIRGs glow fiercely in infrared light, appearing 100 times brighter than our Milky Way Galaxy. The large amount of dust in these galaxies produces the brilliant infrared glow. The dust is generated by a firestorm of star birth triggered by the collisions. IRAS 19297-0406 is producing about 200 new Sun-like stars every year -- about 100 times more stars than our Milky Way creates. The hotbed of this star formation is the central region [the yellow objects]. This area is swamped in the dust created by the flurry of star formation. The bright blue material surrounding the central region corresponds to the ultraviolet glow of new stars. The ultraviolet light is not obscured by dust. Astronomers believe that this area is creating fewer new stars and therefore not as much dust. The colliding system [yellow and blue regions] has a diameter of about 30,000 light-years, or about half the size of the Milky Way. The tail [faint blue material at left] extends out for another 20,000 light-years. Astronomers used both cameras to witness the flocks of new stars that are forming from the galactic wreckage. NICMOS penetrated the dusty veil that masks the intense star birth in the central region. ACS captured the visible starlight of the colliding system's blue outer region. 
IRAS 19297-0406 may be similar to the so-called Hickson compact groups -- clusters of at least four galaxies in a tight configuration that are isolated from other galaxies. The galaxies are so close together that they lose energy from the relentless pull of gravity. Eventually, they fall into each other and form one massive galaxy. This color-composite image was made by combining photographs taken in near-infrared light with NICMOS and ultraviolet and visible light with ACS. The pictures were taken with these filters: the H-band and J-band on NICMOS; the V-band on the ACS wide-field camera; and the U-band on the ACS high-resolution camera. The images were taken on May 13 and 14. Credits: NASA, the NICMOS Group (STScI, ESA), and the NICMOS Science Team (University of Arizona)

  15. LBT/LUCIFER view of star-forming galaxies in the cluster 7C 1756+6520 at z ˜ 1.4

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Sommariva, Veronica; Cresci, Giovanni; Sani, Eleonora; Galametz, Audrey; Mannucci, Filippo; Petropoulou, Vasiliki; Fumana, Marco

    2012-10-01

    Galaxy clusters are key places to study the contribution of nature (i.e. mass and morphology) and nurture (i.e. environment) in the formation and evolution of galaxies. Recently, a number of clusters at z > 1, i.e. corresponding to the first epochs of cluster formation, have been discovered and confirmed spectroscopically. We present new observations obtained with the LBT Near Infrared Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER) spectrograph at the Large Binocular Telescope (LBT) of a sample of star-forming galaxies associated with a large-scale structure around the radio galaxy 7C 1756+6520 at z = 1.42. Combining our spectroscopic data and the literature photometric data, we derived some of the properties of these galaxies: star formation rate, metallicity and stellar mass. With the aim of analysing the effect of the cluster environment on galaxy evolution, we have located the galaxies in the plane of the so-called fundamental metallicity relation (FMR), which is known not to evolve with redshift up to z = 2.5 for field galaxies but is still unexplored in rich environments at low and high redshifts. We found that the properties of the galaxies in the cluster 7C 1756+6520 are compatible with the FMR, which suggests that the effect of the environment on galaxy metallicity at this early epoch of cluster formation is marginal. As a side study, we also report the spectroscopic analysis of a bright active galactic nucleus, belonging to the cluster, which shows a significant outflow of gas.

  16. Cryptography Would Reveal Alterations In Photographs

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1995-01-01

    Public-key decryption method proposed to guarantee authenticity of photographic images represented in form of digital files. In method, digital camera generates original data from image in standard public format; also produces coded signature to verify standard-format image data. Scheme also helps protect against other forms of lying, such as attaching false captions.
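The signature idea can be illustrated with textbook RSA; this is a toy sketch only (tiny primes, no padding), not the actual proposed scheme, and the key values are invented for demonstration:

```python
import hashlib

# Toy illustration of signing standard-format image data: the camera signs
# a hash with its private key; anyone can verify with the public key.
# Textbook RSA with tiny primes -- real use needs large keys and padding.
P, Q = 61, 53
N = P * Q                  # public modulus
PHI = (P - 1) * (Q - 1)
E = 17                     # public exponent
D = pow(E, -1, PHI)        # private exponent (modular inverse)

def sign(data: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(h, D, N)            # camera side: sign with private key

def verify(data: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(sig, E, N) == h     # anyone: check with public key
```

Any alteration of the image data changes the hash, so the stored signature no longer verifies, which is the tamper-evidence property the method proposes.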

  17. Visualization of hump formation in high-speed gas metal arc welding

    NASA Astrophysics Data System (ADS)

    Wu, C. S.; Zhong, L. M.; Gao, J. Q.

    2009-11-01

    The hump bead is a typical weld defect observed in high-speed welding. Its occurrence limits the improvement of welding productivity. Visualization of hump formation during high-speed gas metal arc welding (GMAW) is helpful in the better understanding of the humping phenomena so that effective measures can be taken to suppress or decrease the tendency of hump formation and achieve higher productivity welding. In this study, an experimental system was developed to implement vision-based observation of the weld pool behavior during high-speed GMAW. Considering the weld pool characteristics in high-speed welding, a narrow band-pass and neutral density filter was equipped for the CCD camera, the suitable exposure time was selected and side view orientation of the CCD camera was employed. The events that took place at the rear portion of the weld pools were imaged during the welding processes with and without hump bead formation, respectively. It was found that the variation of the weld pool surface height and the solid-liquid interface at the pool trailing with time shows some useful information to judge whether the humping phenomenon occurs or not.

  18. Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan

    NASA Astrophysics Data System (ADS)

    Pichette, Julien; Charle, Wouter; Lambrechts, Andy

    2017-02-01

    Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.

  19. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    … in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of … blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS…

  20. Featured Image: A Galaxy Plunges Into a Cluster Core

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2015-10-01

    The galaxy that takes up most of the frame in this stunning image (click for the full view!) is NGC 1427A. This is a dwarf irregular galaxy (unlike the fortuitously located background spiral galaxy in the lower right corner of the image), and it's currently in the process of plunging into the center of the Fornax galaxy cluster. Marcelo Mora (Pontifical Catholic University of Chile) and collaborators have analyzed observations of this galaxy made by both the Very Large Telescope in Chile and the Hubble Advanced Camera for Surveys, which produced the image shown here as a color composite in three channels. The team worked to characterize the clusters of star formation within NGC 1427A, identifiable in the image as bright knots within the galaxy, and to determine how the interactions of this galaxy with its cluster environment affect the star formation within it. For more information and the original image, see the paper below.Citation:Marcelo D. Mora et al 2015 AJ 150 93. doi:10.1088/0004-6256/150/3/93

  1. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered for evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. The variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate, as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
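The camera comparisons above follow from a simple signal-to-noise model; the formula is the standard shot-noise/read-noise budget, but the function name and example numbers are illustrative assumptions, not values from the chapter:

```python
import math

# Simple per-pixel SNR model: signal electrons S, read noise r (e-),
# EM gain M, and excess-noise factor F (F ~ sqrt(2) for EMCCDs, 1 for CCDs).
#   SNR = S / sqrt(F^2 * S + (r / M)^2)
def snr(signal_e, read_noise_e, em_gain=1.0, excess_noise=1.0):
    shot = excess_noise**2 * signal_e        # shot noise (+ EM excess noise)
    read = (read_noise_e / em_gain)**2       # read noise, suppressed by gain
    return signal_e / math.sqrt(shot + read)
```

Under this model, a slow-scan CCD wins when read noise is already low, while at video-rate read noise (tens of electrons) the EM gain of an EMCCD buries the read term and dominates, matching the chapter's recommendations.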

  2. Earth Observations taken by the Expedition 22 Crew

    NASA Image and Video Library

    2009-12-01

    ISS022-E-005258 (1 Dec. 2009) --- This detailed hand-held digital camera image recorded from the International Space Station highlights sand dunes in the Fachi-Bilma erg, or sand sea, which is part of the central eastern Tenere Desert. The Tenere occupies much of southeastern Niger and is considered to be part of the larger Sahara Desert that stretches across northern Africa. Much of the Sahara is composed of ergs; with an area of approximately 150,000 square kilometers, the Fachi-Bilma is one of the larger sand seas. Two major types of dunes are visible in the image. Large, roughly north-south oriented transverse dunes fill the image frame. This type of dune tends to form at roughly right angles to the dominant northeasterly winds. The dune crests are marked in this image by darker, steeper sand accumulations that cast shadows. The lighter-toned zones between are lower interdune "flats". The large dunes appear to be highly symmetrical with regard to their crests. This suggests that the crest sediments are coarser, preventing the formation of a steeper slip face on the downwind side of the dune by wind-driven motion of similarly-sized sand grains. According to NASA scientists, this particular form of transverse dune is known as a zibar, and is thought to form by winnowing of smaller sand grains by the wind, leaving the coarser grains to form dune crests. A second set of thin linear dunes oriented at roughly right angles to the zibar dunes appears to have formed on the larger landforms and is therefore a younger landscape feature. These dunes appear to be forming from finer grains in the same wind field as the larger zibars. The image was taken with a digital still camera fitted with a 400 mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science & Analysis Laboratory, Johnson Space Center.

  3. A New Cryomacroscope Device (Type III) for Visualization of Physical Events in Cryopreservation with Applications to Vitrification and Synthetic Ice Modulators

    PubMed Central

    Rabin, Yoed; Taylor, Michael J.; Feig, Justin S. G.; Baicu, Simona; Chen, Zhen

    2013-01-01

    The objective of the current study is to develop a new cryomacroscope prototype for the study of vitrification in large-size specimens. The unique contribution in the current study is in developing a cryomacroscope setup as an add-on device to a commercial controlled-rate cooler and in demonstration of physical events in cryoprotective cocktails containing synthetic ice modulators (SIM)—compounds which hinder ice crystal growth. Cryopreservation by vitrification is a highly complex application, where the likelihood of crystallization, fracture formation, degradation of the biomaterial quality, and other physical events are dependent not only upon the instantaneous cryogenic conditions, but more significantly upon the evolution of conditions along the cryogenic protocol. Nevertheless, cryopreservation success is most frequently assessed by evaluating the cryopreserved product at its end states—either at the cryogenic storage temperature or room temperature. The cryomacroscope is the only available device for visualization of large-size specimens along the thermal protocol, in an effort to correlate the quality of the cryopreserved product with physical events. Compared with earlier cryomacroscope prototypes, the new Cryomacroscope-III evaluated here benefits from a higher resolution color camera, improved illumination, digital recording capabilities, and high repeatability in tested thermal conditions via a commercial controlled-rate cooler. A specialized software package was developed in the current study, having two modes of operation: (a) experimentation mode to control the operation of the camera, record camera frames sequentially, log thermal data from sensors, and save case-specific information; and (b) post-processing mode to generate a compact file integrating images, elapsed time, and thermal data for each experiment. 
The benefits of the Cryomacroscope-III are demonstrated using various tested mixtures of SIMs with the cryoprotective cocktail DP6, which were found effective in preventing ice growth, even at significantly subcritical cooling rates with reference to the pure DP6. PMID:23993920

  4. Performance assessment of a compressive sensing single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2017-04-01

    Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) instead captures a smaller number of unconventional measurements from the scene and then uses a companion process to recover the image. CS has the potential to acquire imagery with information content equivalent to a large-format array while using smaller, cheaper, and lower-bandwidth components. However, the benefits of CS do not come without compromise. The chosen CS architecture must effectively balance physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld-object target set was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.
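
    As a hedged illustration of the measurement-versus-reconstruction trade described above (not the authors' actual system or algorithm), the sketch below simulates a single-pixel architecture: each measurement sums the scene through one random +/-1 mask, and a sparse scene is recovered with Orthogonal Matching Pursuit, one common CS reconstruction method. All sizes and the sparsity level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# k-sparse "scene" of n pixels, observed with m < n measurements.
n, k, m = 64, 3, 32
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Each measurement sums the scene through one random +/-1 mask (a row
# of A), as a DMD-based single-pixel camera would.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: estimate a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedily pick the column most correlated with the residual,
        # then re-fit all selected columns by least squares.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat))   # reconstruction error; typically near zero here
```

    Shrinking m trades reconstruction accuracy for fewer measurements, which is the trade space the perception experiments in the record probe.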

  5. Hubble Peers into the Storm

    NASA Image and Video Library

    2017-12-08

    This shot from the NASA/ESA Hubble Space Telescope shows a maelstrom of glowing gas and dark dust within one of the Milky Way’s satellite galaxies, the Large Magellanic Cloud (LMC). This stormy scene shows a stellar nursery known as N159, an HII region over 150 light-years across. N159 contains many hot young stars. These stars are emitting intense ultraviolet light, which causes nearby hydrogen gas to glow, and torrential stellar winds, which are carving out ridges, arcs, and filaments from the surrounding material. At the heart of this cosmic cloud lies the Papillon Nebula, a butterfly-shaped region of nebulosity. This small, dense object is classified as a High-Excitation Blob, and is thought to be tightly linked to the early stages of massive star formation. N159 is located over 160,000 light-years away. It resides just south of the Tarantula Nebula (heic1402), another massive star-forming complex within the LMC. This image comes from Hubble’s Advanced Camera for Surveys. The region was previously imaged by Hubble’s Wide Field Planetary Camera 2, which also resolved the Papillon Nebula for the first time. Credit: ESA/Hubble & NASA

  6. New long-zoom lens for 4K super 35mm digital cameras

    NASA Astrophysics Data System (ADS)

    Thorpe, Laurence J.; Usui, Fumiaki; Kamata, Ryuhei

    2015-05-01

    The world of television production is beginning to adopt 4K Super 35 mm (S35) image capture for a widening range of program genres that seek both the unique imaging properties of that large image format and the protection of their program assets in a world anticipating future 4K services. Documentary and natural history production in particular are transitioning to this form of production. The nature of their shooting demands long zoom lenses. In their traditional world of 2/3-inch digital HDTV cameras they have a broad choice in portable lenses - with zoom ranges as high as 40:1. In the world of Super 35mm the longest zoom lens is limited to 12:1 offering a telephoto of 400mm. Canon was requested to consider a significantly longer focal range lens while severely curtailing its size and weight. Extensive computer simulation explored countless combinations of optical and optomechanical systems in a quest to ensure that all operational requests and full 4K performance could be met. The final lens design is anticipated to have applications beyond entertainment production, including a variety of security systems.

  7. Comparative Analysis of Gene Expression for Convergent Evolution of Camera Eye Between Octopus and Human

    PubMed Central

    Ogura, Atsushi; Ikeo, Kazuho; Gojobori, Takashi

    2004-01-01

    Although the camera eye of the octopus is very similar to that of humans, phylogenetic and embryological analyses have suggested that their camera eyes were acquired independently; this is known as a typical example of convergent evolution. To study the molecular basis of the convergent evolution of camera eyes, we conducted a comparative analysis of gene expression in octopus and human camera eyes. We sequenced 16,432 ESTs of the octopus eye, yielding 1052 nonredundant genes that have matches in the protein database. Comparing these 1052 genes with 13,303 already-known ESTs of the human eye, 729 (69.3%) genes were commonly expressed between the human and octopus eyes. In contrast, when we compared octopus eye ESTs with human connective tissue ESTs, the expression similarity was quite low. To trace the evolutionary changes that are potentially responsible for camera eye formation, we also compared octopus eye ESTs with the completed genome sequences of other organisms. We found that 1019 of the 1052 genes already existed in the common ancestor of bilaterians, and 875 genes were conserved between humans and octopuses. This suggests that a large number of conserved genes and their similar expression may be responsible for the convergent evolution of the camera eye. PMID:15289475

  8. A study on ice crystal formation behavior at intracellular freezing of plant cells using a high-speed camera.

    PubMed

    Ninagawa, Takako; Eguchi, Akemi; Kawamura, Yukio; Konishi, Tadashi; Narumi, Akira

    2016-08-01

    Intracellular ice crystal formation (IIF) causes several problems for cryopreservation, and understanding it is key to developing improved cryopreservation techniques that can ensure the long-term preservation of living tissues. Therefore, the ability to capture clear intracellular freezing images is important for understanding both the occurrence and the behavior of IIF. The authors developed a new cryomicroscopic system equipped with a high-speed camera for this study and successfully used it to capture clearer images of the IIF process in the epidermal tissues of strawberry geranium (Saxifraga stolonifera Curtis) leaves. This system was then used to examine patterns in the location and formation of intracellular ice crystals, to evaluate the degree of cell deformation caused by ice crystals inside the cell, and to measure the growth rate and grain size of intracellular ice crystals at various cooling rates. The results showed that an increase in cooling rate influenced the formation pattern of intracellular ice crystals but had less of an effect on their location. Moreover, it reduced the degree of supercooling at the onset of intracellular freezing and the degree of cell deformation; the characteristic grain size of intracellular ice crystals was also reduced, but their growth rate was increased. Thus, the high-speed camera images could reveal these changes in IIF behavior with increasing cooling rate, and these changes are believed to have been caused by an increase in the degree of supercooling. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Technical Communication--Taking the User into Account.

    DTIC Science & Technology

    1981-08-01

    the effect of cognitive style on instructional design, it may be more cost-effective to evaluate how different instructional formats impact different...deck, 2 SONY VCK-3210 television cameras, and a SONY Switcher/Fader SEG-1 special effects generator. One television camera was positioned next to the ...

  10. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance must be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data is saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and the high-bandwidth-bus technique are applied in the design to improve the storage rate. Image data is read out from flash under the control logic of the FPGA and output separately over three different interfaces: Camera Link, LVDS, and PAL, which can provide image data for debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720*576, the resolution of the PAL image differs from that of the input image, so the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three output formats can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.
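
    The PAL resolution conversion mentioned above can be sketched as a nearest-neighbour rescale to the 720x576 raster. This is a minimal illustration of the idea, not the FPGA implementation described in the paper (a hardware scaler would typically also filter to avoid aliasing):

```python
import numpy as np

def to_pal(frame):
    """Nearest-neighbour rescale of a 2-D frame to the 720x576 PAL raster."""
    h, w = frame.shape[:2]
    rows = np.arange(576) * h // 576   # source row for each PAL row
    cols = np.arange(720) * w // 720   # source column for each PAL column
    return frame[rows][:, cols]

# Hypothetical input camera format, larger than PAL.
src = np.random.default_rng(0).integers(0, 256, size=(1024, 1280), dtype=np.uint8)
pal = to_pal(src)
print(pal.shape)   # (576, 720)
```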

  11. The optical design concept of SPICA-SAFARI

    NASA Astrophysics Data System (ADS)

    Jellema, Willem; Kruizinga, Bob; Visser, Huib; van den Dool, Teun; Pastor Santos, Carmen; Torres Redondo, Josefina; Eggens, Martin; Ferlet, Marc; Swinyard, Bruce; Dohlen, Kjetil; Griffin, Doug; Gonzalez Fernandez, Luis Miguel; Belenguer, Tomas; Matsuhara, Hideo; Kawada, Mitsunobu; Doi, Yasuo

    2012-09-01

    The Safari instrument on the Japanese SPICA mission is a zodiacal-background-limited imaging spectrometer offering a photometric imaging mode (R ≈ 2) and low-resolution (R = 100) and medium-resolution (R = 2000 at 100 μm) spectroscopy modes in three photometric bands covering the 34-210 μm wavelength range. The instrument utilizes Nyquist-sampled filled arrays of very sensitive TES detectors providing a 2'x2' instantaneous field of view. The all-reflective optical system of Safari is highly modular and consists of an input optics module containing the entrance shutter, a calibration source, and a pair of filter wheels, followed by an interferometer and finally the camera-bay optics accommodating the focal-plane arrays. The optical design is largely driven and constrained by volume, inviting a compact three-dimensional arrangement of the interferometer and camera-bay optics that does not compromise the optical performance requirements associated with a diffraction- and background-limited spectroscopic imaging instrument. Central to the optics, we present a flexible and compact non-polarizing Mach-Zehnder interferometer layout, with dual input and output ports, employing a novel FTS scan mechanism based on magnetic bearings and a linear motor. In this paper we discuss the conceptual design of the focal-plane optics and describe how we implement the optical instrument functions, define the photometric bands, deal with straylight control, diffraction, and thermal emission in the long-wavelength limit, and interface to the large-format focal-plane arrays at one end and the SPICA telescope assembly at the other.
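
    For context, the quoted resolving powers map directly onto the scan length of the FTS mechanism: a Fourier transform spectrometer reaches a wavenumber resolution of roughly 1/(2L) for a maximum optical path difference L, so R = λ/Δλ = 2L/λ. The sketch below applies this standard FTS relation to the numbers quoted in the abstract; it is a back-of-the-envelope check, not a statement about Safari's actual mechanism design.

```python
def fts_opd_for_resolution(lam_m, R):
    """Maximum optical path difference an FTS must scan to reach
    resolving power R at wavelength lam_m, using delta_sigma = 1/(2L)."""
    return R * lam_m / 2.0

# Safari's R = 2000 mode at 100 um implies roughly 0.1 m of OPD scan.
print(fts_opd_for_resolution(100e-6, 2000))
```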

  12. THE ASTRALUX LARGE M-DWARF MULTIPLICITY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janson, Markus; Hormuth, Felix; Bergfors, Carolina

    2012-07-20

    We present the results of an extensive high-resolution imaging survey of M-dwarf multiplicity using the Lucky Imaging technique. The survey made use of the AstraLux Norte camera at the Calar Alto 2.2 m telescope and the AstraLux Sur camera at the ESO New Technology Telescope in order to cover nearly the full sky. In total, 761 stars were observed (701 M-type and 60 late K-type), among which 182 new and 37 previously known companions were detected in 205 systems. Most of the targets have been observed during two or more epochs, and could be confirmed as physical companions through common proper motion, often with orbital motion being confirmed in addition. After accounting for various bias effects, we find a total M-dwarf multiplicity fraction of 27% ± 3% within the AstraLux detection range of 0.08''-6'' (semimajor axes of approximately 3-227 AU at a median distance of 30 pc). We examine various statistical multiplicity properties within the sample, such as the trend of multiplicity fraction with stellar mass and the semimajor axis distribution. The results indicate that M-dwarfs are largely consistent with constituting an intermediate step in a continuous distribution from higher-mass stars down to brown dwarfs. Along with other observational results in the literature, this provides further indications that stars and brown dwarfs may share a common formation mechanism, rather than being distinct populations.

  13. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.
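
    The direct binary search mentioned above can be sketched for a single reference plane as follows. This is a generic Fourier-hologram toy (random binary start, small array, single target plane), not the authors' multi-plane variant: each pixel flip is kept only when it lowers the reconstruction error, so the error can never increase.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
target = rng.uniform(0.0, 1.0, (N, N))     # desired reconstruction amplitude

def recon_error(holo, target):
    """Squared error between the target amplitude and the far-field
    (Fourier) reconstruction of the binary hologram."""
    field = np.fft.fft2(holo) / holo.size
    return float(np.sum((np.abs(field) - target) ** 2))

# Random binary starting hologram with values in {-1, +1}.
holo = np.where(rng.uniform(size=(N, N)) > 0.5, 1.0, -1.0)
initial_err = recon_error(holo, target)
err = initial_err

# Direct binary search: visit every pixel, flip it, and keep the flip
# only when the reconstruction error decreases.
for _ in range(2):                         # two passes over the hologram
    for i in range(N):
        for j in range(N):
            holo[i, j] *= -1.0
            trial = recon_error(holo, target)
            if trial < err:
                err = trial
            else:
                holo[i, j] *= -1.0         # revert the flip
print(initial_err, err)                    # err never exceeds initial_err
```

    The paper's modification amounts to evaluating this error over several reference image planes at different depths instead of the single Fourier plane used here.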

  14. Dust particles in controlled fusion devices: morphology, observations in the plasma and influence on the plasma performance

    NASA Astrophysics Data System (ADS)

    Rubel, M.; Cecconello, M.; Malmberg, J. A.; Sergienko, G.; Biel, W.; Drake, J. R.; Hedqvist, A.; Huber, A.; Philipps, V.

    2001-08-01

    The formation and release of particle agglomerates, i.e. debris and dusty objects, from plasma facing components and the impact of such materials on plasma operation in controlled fusion devices has been studied in the Extrap T2 reversed field pinch and the TEXTOR tokamak. Several plasma diagnostic techniques, camera observations and surface analysis methods were applied for in situ and ex situ investigation. The results are discussed in terms of processes that are decisive for dust transfer: localized power deposition connected with wall locked modes causing emission of carbon granules, brittle destruction of graphite and detachment of thick flaking co-deposited layers. The consequences for large next step devices are also addressed.

  15. Atmospheric tomography using a fringe pattern in the sodium layer.

    PubMed

    Baharav, Y; Ribak, E N; Shamir, J

    1994-02-15

    We wish to measure and separate the contribution of atmospheric turbulent layers for multiconjugate adaptive optics. To this end, we propose to create a periodic fringe pattern in the sodium layer and image it with a modified Hartmann sensor. Overlapping sections of the fringes are imaged by a lenslet array onto contiguous areas in a large-format camera. Low-layer turbulence causes an overall shift of the fringe pattern in each lenslet, and high-altitude turbulence results in internal deformations in the pattern. Parallel Fourier analysis permits separation of the atmospheric layers. Two mirrors, one conjugate to a ground layer and the other conjugate to a single high-altitude layer, are shown to widen the field of view significantly compared with existing methods.
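
    The per-lenslet fringe shift can be read out from the phase of the Fourier component at the fringe frequency, in the spirit of the parallel Fourier analysis mentioned above. The sketch below uses synthetic 1-D fringes with an illustrative frequency and recovers a known phase shift between two patterns; it is a minimal demonstration, not the paper's full two-layer separation.

```python
import numpy as np

n, f = 256, 8                     # samples per lenslet and fringe frequency (cycles)
x = np.arange(n) / n

def fringe(phase):
    """Ideal sinusoidal fringe pattern with a given phase offset."""
    return 1.0 + np.cos(2 * np.pi * f * x + phase)

def fringe_phase(signal):
    """Phase of the fringe, read from the FFT bin at the fringe frequency."""
    return np.angle(np.fft.fft(signal)[f])

# An overall pattern shift of 1.1 rad between two observations, as a
# low-layer tilt would produce, is recovered from the phase difference.
shift = fringe_phase(fringe(0.3 + 1.1)) - fringe_phase(fringe(0.3))
print(shift)   # ~1.1 rad
```

    Internal deformations from a high-altitude layer would instead show up as a phase that varies across sub-windows of each lenslet image, which is what lets the two contributions be separated.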

  16. Earth observations taken during STS-41C

    NASA Image and Video Library

    2009-06-25

    41C-51-2414 (6-13 April 1984) --- The entire Texas portion of the Gulf Coast and part of Louisiana's shoreline are visible in this frame, photographed on 4"x5" roll film using a large format camera aboard the Earth-orbiting space shuttle Challenger. Coastal bays and other geographic features from the Boca Chica (mouth of Rio Grande), to the mouth of the Mississippi are included in the frame, photographed from approximately 285 nautical miles above Earth. Inland cities that can be easily delineated are San Antonio, Austin, College Station, Del Rio and Lufkin. Easily pinpointed coastal cities include Houston, Galveston and Corpus Christi. The 41-C crew members used this frame as one of the visuals for their post-flight press conference on April 24, 1984.

  17. Automatic Orientation of Large Blocks of Oblique Images

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2013-05-01

    Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to advantages such as ease of interpretation, completeness through mitigation of occluded areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable; for large image blocks this is certainly an impediment. In the theoretical part of the work we review three common least-squares adjustment methods and recap possible ways to orient a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. The oblique cameras are rotated in the four looking directions (forward, backward, left, and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a developed tool to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: in terms of computational time and of the success of the bundle block adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pix.
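
    The idea behind the connectivity tool, using georeferenced camera positions to restrict tie-point matching to plausibly overlapping images, can be sketched as below. The image names, positions, and distance threshold are all hypothetical; the actual tool works from the Applanix navigation solution.

```python
import itertools
import math

# Hypothetical georeferenced camera positions (x, y in metres).
positions = {
    "img_001": (0.0, 0.0),
    "img_002": (40.0, 0.0),
    "img_003": (80.0, 5.0),
    "img_004": (500.0, 500.0),   # far from the others: no overlap expected
}

def connectivity(positions, max_dist):
    """Image pairs close enough to plausibly share tie points; feature
    matching is then attempted only on these pairs."""
    pairs = []
    for (a, pa), (b, pb) in itertools.combinations(positions.items(), 2):
        if math.dist(pa, pb) <= max_dist:
            pairs.append((a, b))
    return pairs

print(connectivity(positions, max_dist=100.0))
```

    Pruning the pair list this way is what yields the twofold benefit noted above: far fewer matching attempts, and fewer spurious matches entering the bundle block adjustment.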

  18. Astronaut Walz on flight deck with IMAX camera

    NASA Image and Video Library

    1996-11-04

    STS079-362-023 (16-26 Sept. 1996) --- Astronaut Carl E. Walz, mission specialist, positions the IMAX camera for a shoot on the flight deck of the Space Shuttle Atlantis. The IMAX project is a collaboration among NASA, the Smithsonian Institution's National Air and Space Museum, IMAX Systems Corporation and the Lockheed Corporation to document in motion picture format significant space activities and promote NASA's educational goals using the IMAX film medium. This system, developed by IMAX of Toronto, uses specially designed 65mm cameras and projectors to record and display very high definition color motion pictures which, accompanied by six-channel high fidelity sound, are displayed on screens in IMAX and OMNIMAX theaters that are up to ten times larger than a conventional screen, producing a feeling of "being there." The 65mm photography is transferred to 70mm motion picture films for showing in IMAX theaters. IMAX cameras have been flown on 14 previous missions.

  19. Molecular Shocks Associated with Massive Young Stars: CO Line Images with a New Far-Infrared Spectroscopic Camera on the Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Watson, Dan M.

    1997-01-01

    Under the terms of our contract with NASA Ames Research Center, the University of Rochester (UR) offers the following final technical report on grant NAG 2-958, Molecular shocks associated with massive young stars: CO line images with a new far-infrared spectroscopic camera, given for implementation of the UR Far-Infrared Spectroscopic Camera (FISC) on the Kuiper Airborne Observatory (KAO), and use of this camera for observations of star-formation regions 1. Two KAO flights in FY 1995, the final year of KAO operations, were awarded to this program, conditional upon a technical readiness confirmation which was given in January 1995. The funding period covered in this report is 1 October 1994 - 30 September 1996. The project was supported with $30,000, and no funds remained at the conclusion of the project.

  20. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
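
    A minimal model of the spatio-spectral concept shows how the hyperspectral cube is reassembled from successive frames. Two simplifications are assumed here (one filter band per sensor row, and exactly one ground line of forward motion per frame); the real camera uses stepwise filter lines several rows high, but the bookkeeping is the same.

```python
import numpy as np

n_bands, width, n_lines = 4, 8, 10    # filter rows, swath width, ground lines
T = n_lines + n_bands - 1             # frames needed to see every line in every band

# Synthetic ground scene: one spectrum per (line, column).
rng = np.random.default_rng(0)
scene = rng.uniform(size=(n_bands, n_lines, width))

# Simulate acquisition: at frame t, sensor row r (carrying filter band r)
# images ground line t - r, i.e. the platform advances one line per frame.
frames = np.zeros((T, n_bands, width))
for t in range(T):
    for r in range(n_bands):
        g = t - r
        if 0 <= g < n_lines:
            frames[t, r] = scene[r, g]

# Reassemble the cube: band b of ground line g was seen in frame g + b.
cube = np.zeros_like(scene)
for b in range(n_bands):
    for g in range(n_lines):
        cube[b, g] = frames[g + b, b]

print(np.allclose(cube, scene))   # True: the cube is recovered exactly
```

    In flight the mapping from frames to ground lines comes from the navigation data rather than an exact one-line step, which is why rapid frame succession matters for this design.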

  1. The First Year of Croatian Meteor Network

    NASA Astrophysics Data System (ADS)

    Andreic, Zeljko; Segon, Damir

    2010-08-01

    The idea and a short history of the Croatian Meteor Network (CMN) are described. Based on the use of cheap surveillance cameras, standard PC TV cards, and old PCs, the network allows schools, amateur societies, and individuals to participate in a photographic meteor patrol program. The network has a strong educational component, and many cameras are located at or around teaching facilities. Data obtained by these cameras are collected and processed by the scientific team of the network. Currently 14 cameras are operational, covering a large part of the Croatian sky; data gathering is fully functional, and the data reduction software is in the testing phase.

  2. "Ipsilateral, high, single-hand, sideways"-Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery.

    PubMed

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng

    2016-10-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eyes of the surgeon and providing the VATS team with a stable and clear operative view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize our method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of the surgery.

  3. Polarizing aperture stereoscopic cinema camera

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny

    2012-03-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.

  5. A comprehensive HST BVI catalogue of star clusters in five Hickson compact groups of galaxies

    NASA Astrophysics Data System (ADS)

    Fedotov, K.; Gallagher, S. C.; Durrell, P. R.; Bastian, N.; Konstantopoulos, I. S.; Charlton, J.; Johnson, K. E.; Chandar, R.

    2015-05-01

    We present a photometric catalogue of star cluster candidates in Hickson compact groups (HCGs) 7, 31, 42, 59, and 92, based on observations with the Advanced Camera for Surveys and the Wide Field Camera 3 on the Hubble Space Telescope. The catalogue contains precise cluster positions (right ascension and declination), magnitudes, and colours in the BVI filters. The number of detected sources ranges from 2200 to 5600 per group, from which we construct the high-confidence sample by applying a number of criteria designed to reduce foreground and background contaminants. Furthermore, the high-confidence cluster candidates for each of the 16 galaxies in our sample are split into two subpopulations: one that may contain young star clusters and one dominated by older globular clusters. The ratio of young star cluster to globular cluster candidates varies from group to group, from equal numbers to the extreme of HCG 31, which has a ratio of 8 to 1 due to a recent starburst induced by interactions in the group. We find that the number of blue clusters with MV < -9 correlates well with the current star formation rate in an individual galaxy, while the number of globular cluster candidates with MV < -7.8 correlates well (though with large scatter) with the stellar mass. Analyses of the high-confidence sample presented in this paper show that star clusters can be successfully used to infer the gross star formation history of the host groups and therefore determine their placement in a proposed evolutionary sequence for compact galaxy groups.

  6. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades, multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have become dramatically widespread. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology is given, also providing some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  7. Prototype of microbolometer thermal infrared camera for forest fire detection from space

    NASA Astrophysics Data System (ADS)

    Guerin, Francois; Dantes, Didier; Bouzou, Nathalie; Chorier, Philippe; Bouchardy, Anne-Marie; Rollin, Joël.

    2017-11-01

    The contribution of the thermal infrared (TIR) camera to the FUEGO Earth observation mission is to help discriminate clouds and smoke, detect false alarms of forest fires, and monitor forest fires. Consequently, the camera needs a large dynamic range of detectable radiances. Small volume, low mass, and low power are required by the small FUEGO payload. These specifications can also be attractive for other, similar missions.

  8. The Value of Photographic Observations in Improving the Accuracy of Satellite Orbits.

    DTIC Science & Technology

    1982-02-01

    cameras in the years 1971-3 have recently become available, particularly of the balloon-satellite Explorer 19, from the observing stations at Riga...from the Russian AFU-75 cameras in the years 1971-1973 have recently become available, particularly of the balloon-satellite Explorer 19, from the...large numbers of observations from the Russian AFU-75 cameras have become available, covering the years 1971-3. The observations, made during the

  9. Ground-based remote sensing with long lens video camera for upper-stem diameter and other tree crown measurements

    Treesearch

    Neil A. Clark; Sang-Mook Lee

    2004-01-01

    This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...

  10. Thermal analysis of microlens formation on a sensitized gelatin layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko

    2009-07-01

    We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.

  11. The South Pole Telescope: Unraveling the Mystery of Dark Energy

    NASA Astrophysics Data System (ADS)

    Reichardt, Christian L.; de Haan, Tijmen; Bleem, Lindsey E.

    2016-07-01

    The South Pole Telescope (SPT) is a 10-meter telescope designed to survey the millimeter-wave sky, taking advantage of the exceptional observing conditions at the Amundsen-Scott South Pole Station. The telescope and its ground-breaking 960-element bolometric camera finished surveying 2500 square degrees at 95, 150, and 220 GHz in November 2011. We have discovered hundreds of galaxy clusters in the SPT-SZ survey through the Sunyaev-Zel’dovich (SZ) effect. The formation of galaxy clusters, the largest bound objects in the universe, is highly sensitive to dark energy and the history of structure formation. I will discuss the cosmological constraints from the SPT-SZ galaxy cluster sample as well as future prospects with the soon-to-be-installed SPT-3G camera.

  12. James Webb Space Telescope (JWST) and Star Formation

    NASA Technical Reports Server (NTRS)

    Greene, Thomas P.

    2010-01-01

    The 6.5-m aperture James Webb Space Telescope (JWST) will be a powerful tool for studying and advancing numerous areas of astrophysics. Its Fine Guidance Sensor, Near-Infrared Camera, Near-Infrared Spectrograph, and Mid-Infrared Instrument will be capable of making very sensitive, high angular resolution imaging and spectroscopic observations spanning 0.7-28 μm wavelength. These capabilities are very well suited for probing the conditions of star formation in the distant and local Universe. Indeed, JWST has been designed to detect first light objects as well as to study the fine details of jets, disks, chemistry, envelopes, and the central cores of nearby protostars. We will be able to use its cameras, coronagraphs, and spectrographs (including multi-object and integral field capabilities) to study many aspects of star forming regions throughout the galaxy, the Local Group, and more distant regions. I will describe the basic JWST scientific capabilities and illustrate a few ways in which they can be applied to star formation issues and conditions, with a focus on Galactic regions.

  13. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and space- (memory-) consuming due to the super-large normal matrix arising from the data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is used to develop a stable and efficient bundle block adjustment system that can deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, eight real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
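    The abstract does not publish the BSMC layout itself, but the PCG side of the method can be sketched. Below is a minimal Jacobi-preconditioned conjugate gradient solving a small stand-in for the bundle-adjustment normal equations (the matrix, sizes, and names here are illustrative, not the authors' implementation):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    """Jacobi-preconditioned conjugate gradient for a symmetric
    positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = M_inv_diag * r                 # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD "normal matrix" standing in for J^T J of a bundle block
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 6))
A = J.T @ J + 1e-3 * np.eye(6)
b = rng.standard_normal(6)
x = pcg(A, b, 1.0 / np.diag(A))
```

In a real adjustment A would never be formed densely; the point of a block-compressed scheme is that PCG only needs matrix-vector products, which can be evaluated block by block.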

  14. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.

  15. One-click scanning of large-size documents using mobile phone camera

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Yang, Yuanjie

    2016-07-01

    Current mobile apps for document scanning do not provide convenient operations for large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of a document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU-based image stitching method is adopted to generate a complete document image with high detail. There is no extra manual intervention in the process, and experimental results show that our app performs well, offering convenience and practicability for daily life.
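    The paper's key-frame step selects frames once enough camera motion has accumulated. As a hedged sketch (using mean absolute frame difference as a crude stand-in for the optical-flow magnitude the authors actually compute), key-frame selection can look like this:

```python
import numpy as np

def select_key_frames(frames, motion_threshold=10.0):
    """Pick a new key frame whenever the accumulated inter-frame
    motion (mean absolute pixel difference, a crude optical-flow
    proxy) exceeds the threshold since the last key frame."""
    keys = [0]
    accumulated = 0.0
    for i in range(1, len(frames)):
        accumulated += np.mean(np.abs(frames[i].astype(float) -
                                      frames[i - 1].astype(float)))
        if accumulated >= motion_threshold:
            keys.append(i)
            accumulated = 0.0
    return keys

# Synthetic "video": a bright square drifting across a dark page
frames = []
for t in range(30):
    f = np.zeros((64, 64), dtype=np.uint8)
    f[20:40, t:t + 20] = 255
    frames.append(f)
keys = select_key_frames(frames, motion_threshold=15.0)
```

The selected key frames (rather than every video frame) are then handed to the stitcher, which keeps the mosaic tractable on a mobile GPU.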

  16. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm³. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs, and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those achieved in previous research using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that of a camera based on a pixelated detector.

  17. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    DOE PAGES

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong; ...

    2016-02-15

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm³. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs, and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those achieved in previous research using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that of a camera based on a pixelated detector.
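    Compton imaging locates a source on a cone whose half-angle follows from the energy deposited in the first interaction. This is standard Compton kinematics, not the authors' reconstruction code; the event energies below are illustrative:

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_incident, e_deposited):
    """Scattering angle (degrees) of the Compton cone, from the energy
    deposited in the first interaction:
    cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_incident).
    Returns None for kinematically impossible energy splits."""
    e_scattered = e_incident - e_deposited
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    if not -1.0 <= cos_theta <= 1.0:
        return None
    return math.degrees(math.acos(cos_theta))

# 137Cs line at 662 keV, with 200 keV deposited in the first detector
angle = compton_cone_angle(662.0, 200.0)
```

Intersecting many such cones from many events is what produces the source image; the compact array helps because both interactions of an event are captured inside the same detector volume.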

  18. Mapping the spatial distribution of star formation in cluster galaxies at z ~ 0.5 with the Grism Lens-Amplified Survey from Space (GLASS)

    NASA Astrophysics Data System (ADS)

    Vulcani, Benedetta

    We present the first study of the spatial distribution of star formation in z ~ 0.5 cluster galaxies. The analysis is based on data taken with the Wide Field Camera 3 as part of the Grism Lens-Amplified Survey from Space (GLASS). We illustrate the methodology by focusing on two clusters (MACS0717.5+3745 and MACS1423.8+2404) with different morphologies (one relaxed and one merging) and use foreground and background galaxies as field control sample. The cluster+field sample consists of 42 galaxies with stellar masses in the range 10⁸-10¹¹ M⊙, and star formation rates in the range 1-20 M⊙ yr⁻¹. In both environments, Hα is more extended than the rest-frame UV continuum in 60% of the cases, consistent with diffuse star formation and inside-out growth. The Hα emission appears more extended in cluster galaxies than in the field, pointing perhaps to ionized gas being stripped and/or star formation being enhanced at large radii. The peak of the Hα emission and that of the continuum are offset by less than 1 kpc. We investigate trends with the hot gas density as traced by the X-ray emission, and with the surface mass density as inferred from gravitational lens models, and find no conclusive results. The diversity of morphologies and sizes observed in Hα illustrates the complexity of the environmental processes that regulate star formation.

  19. Measurement of heating coil temperature for e-cigarettes with a "top-coil" clearomizer.

    PubMed

    Chen, Wenhao; Wang, Ping; Ito, Kazuhide; Fowles, Jeff; Shusterman, Dennis; Jaques, Peter A; Kumagai, Kazukiyo

    2018-01-01

    To determine the effect of applied power settings, coil wetness conditions, and e-liquid compositions on the coil heating temperature for e-cigarettes with a "top-coil" clearomizer, and to make associations of coil conditions with emission of toxic carbonyl compounds by combining results herein with the literature. The coil temperature of a second generation e-cigarette was measured at various applied power levels, coil conditions, and e-liquid compositions, including (1) measurements by thermocouple at three e-liquid fill levels (dry, wet-through-wick, and full-wet), three coil resistances (low, standard, and high), and four voltage settings (3-6 V) for multiple coils using propylene glycol (PG) as a test liquid; (2) measurements by thermocouple at additional degrees of coil wetness for a high resistance coil using PG; and (3) measurements by both thermocouple and infrared (IR) camera for high resistance coils using PG alone and a 1:1 (wt/wt) mixture of PG and glycerol (PG/GL). For single point thermocouple measurements with PG, coil temperatures ranged from 322-1008°C, 145-334°C, and 110-185°C under dry, wet-through-wick, and full-wet conditions, respectively, for the total of 13 replaceable coil heads. For conditions measured with both a thermocouple and an IR camera, all thermocouple measurements were between the minimum and maximum across-coil IR camera measurements and equal to 74%-115% of the across-coil mean, depending on test conditions. The IR camera showed details of the non-uniform temperature distribution across heating coils. The large temperature variations under wet-through-wick conditions may explain the large variations in formaldehyde formation rate reported in the literature for such "top-coil" clearomizers. This study established a simple and straightforward protocol to systematically measure e-cigarette coil heating temperature under dry, wet-through-wick, and full-wet conditions. In addition to applied power, the composition of the e-liquid and the device's ability to efficiently deliver e-liquid to the heating coil are important product design factors affecting coil operating temperature. Precautionary temperature checks on e-cigarettes under manufacturer-recommended normal use conditions may help to reduce the health risks from exposure to toxic carbonyl emissions associated with coil overheating.

  20. Measurement of heating coil temperature for e-cigarettes with a “top-coil” clearomizer

    PubMed Central

    Wang, Ping; Ito, Kazuhide; Fowles, Jeff; Shusterman, Dennis; Jaques, Peter A.; Kumagai, Kazukiyo

    2018-01-01

    Objectives To determine the effect of applied power settings, coil wetness conditions, and e-liquid compositions on the coil heating temperature for e-cigarettes with a “top-coil” clearomizer, and to make associations of coil conditions with emission of toxic carbonyl compounds by combining results herein with the literature. Methods The coil temperature of a second generation e-cigarette was measured at various applied power levels, coil conditions, and e-liquid compositions, including (1) measurements by thermocouple at three e-liquid fill levels (dry, wet-through-wick, and full-wet), three coil resistances (low, standard, and high), and four voltage settings (3–6 V) for multiple coils using propylene glycol (PG) as a test liquid; (2) measurements by thermocouple at additional degrees of coil wetness for a high resistance coil using PG; and (3) measurements by both thermocouple and infrared (IR) camera for high resistance coils using PG alone and a 1:1 (wt/wt) mixture of PG and glycerol (PG/GL). Results For single point thermocouple measurements with PG, coil temperatures ranged from 322–1008°C, 145–334°C, and 110–185°C under dry, wet-through-wick, and full-wet conditions, respectively, for the total of 13 replaceable coil heads. For conditions measured with both a thermocouple and an IR camera, all thermocouple measurements were between the minimum and maximum across-coil IR camera measurements and equal to 74%–115% of the across-coil mean, depending on test conditions. The IR camera showed details of the non-uniform temperature distribution across heating coils. The large temperature variations under wet-through-wick conditions may explain the large variations in formaldehyde formation rate reported in the literature for such “top-coil” clearomizers. Conclusions This study established a simple and straightforward protocol to systematically measure e-cigarette coil heating temperature under dry, wet-through-wick, and full-wet conditions. In addition to applied power, the composition of the e-liquid and the device’s ability to efficiently deliver e-liquid to the heating coil are important product design factors affecting coil operating temperature. Precautionary temperature checks on e-cigarettes under manufacturer-recommended normal use conditions may help to reduce the health risks from exposure to toxic carbonyl emissions associated with coil overheating. PMID:29672571

  1. Toward Active Control of Noise from Hot Supersonic Jets

    DTIC Science & Technology

    2012-05-14

    was developed that would allow for easy data sharing among the research teams. This format includes the acoustic data along with all calibration... [Figure: (a) Far-Field Array Calibration; (b) MHz-Rate PIV Camera Setup] ...A plenoptic camera is a similar setup to determine 3-D motion of the flow using a thick light sheet. 2.3 Update on CFD Progress: In the previous interim

  2. Dither and drizzle strategies for Wide Field Camera 3

    NASA Astrophysics Data System (ADS)

    Mutchler, Max

    2010-07-01

    Hubble's 20th anniversary observation of Herbig-Haro object HH 901 in the Carina Nebula is used to illustrate observing strategies and corresponding data reduction methods for the new Wide Field Camera 3 (WFC3), which was installed during Servicing Mission 4 in May 2009. The key issues for obtaining optimal results with offline Multidrizzle processing of WFC3 data sets are presented. These pragmatic instructions in "cookbook" format are designed to help new WFC3 users quickly obtain good results with similar data sets.

  3. Post-explant visualization of thrombi in outflow grafts and their junction to a continuous-flow total artificial heart using a high-definition miniaturized camera.

    PubMed

    Karimov, Jamshid H; Horvath, David; Sunagawa, Gengo; Byram, Nicole; Moazami, Nader; Golding, Leonard A R; Fukamachi, Kiyotaka

    2015-12-01

    Post-explant evaluation of the continuous-flow total artificial heart in preclinical studies can be extremely challenging because of the device's unique architecture. Determining the exact location of tissue regeneration, neointima formation, and thrombus is particularly important. In this report, we describe our first successful experience with visualizing the Cleveland Clinic continuous-flow total artificial heart using a custom-made high-definition miniature camera.

  4. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation

    NASA Astrophysics Data System (ADS)

    Santos, C. Almeida; Costa, C. Oliveira; Batista, J.

    2016-05-01

    The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely of long deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming a smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, helping to fulfil structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points showed high-accuracy results for on-line camera calibration and structure full-motion estimation.
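    The recursive estimation idea can be illustrated with a much simpler filter than the paper's iterated extended Kalman filter: a linear constant-velocity Kalman filter tracking a one-dimensional displacement from noisy measurements (all matrices and noise levels below are illustrative assumptions, not the paper's model):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Linear constant-velocity Kalman filter estimating a 1-D
    displacement from noisy position measurements (a simplified
    stand-in for the paper's iterated extended Kalman filter)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

rng = np.random.default_rng(1)
true_pos = 0.05 * np.arange(100)            # slow drift of the deck
noisy = true_pos + rng.normal(0, 0.5, 100)
est = kalman_track(noisy)
```

The extended/iterated variant in the paper replaces F and H with linearizations of the camera projection model, so the same predict/update loop also refines the calibration parameters.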

  5. An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to solve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing-time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier Transforms (FFTs). Thus we have carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523

  6. An efficient pipeline wavefront phase recovery for the CAFADIS camera for extremely large telescopes.

    PubMed

    Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to solve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing-time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier Transforms (FFTs). Thus we have carried out a comparison between our novel FPGA 2D-FFT and other implementations.
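    FFT-based phase recovery belongs to the Fourier-reconstructor family: the phase is obtained from its measured x/y slopes by dividing in the frequency domain. A minimal periodic-domain sketch in numpy (not the authors' FPGA pipeline; the test phase below is synthetic):

```python
import numpy as np

def fft_phase_recovery(gx, gy):
    """Least-squares wavefront phase from its x/y slopes via 2-D FFTs,
    assuming a periodic square domain. Piston (the mean) is
    unobservable from slopes and is set to zero."""
    n = gx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    denom = kx**2 + ky**2
    denom[0, 0] = 1.0                       # avoid division by zero
    num = -1j * (kx * np.fft.fft2(gx) + ky * np.fft.fft2(gy))
    phi_hat = num / denom
    phi_hat[0, 0] = 0.0                     # drop the piston term
    return np.real(np.fft.ifft2(phi_hat))

# Build a zero-mean test phase and its spectral gradients, then invert
n = 64
k = 2.0 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k)
x = np.arange(n)
phi = (np.sin(2 * np.pi * 3 * x[None, :] / n) +
       np.cos(2 * np.pi * 2 * x[:, None] / n))
gx = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(phi)))
gy = np.real(np.fft.ifft2(1j * ky * np.fft.fft2(phi)))
rec = fft_phase_recovery(gx, gy)
```

The per-pixel work is dominated by the two forward FFTs and one inverse FFT, which is exactly the structure that maps well onto a streaming FPGA pipeline.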

  7. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
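    Computing an event's Cartesian coordinates from observations in multiple cameras reduces, in the simplest planar case, to intersecting two bearing rays. This is a generic two-camera sketch (the geometry and numbers are illustrative, not NASA's correlation method):

```python
import math

def localize_event(cam1, bearing1, cam2, bearing2):
    """Intersect two bearing rays (angles in radians, world frame)
    from two known camera positions; returns the event's (x, y),
    or None if the bearings are parallel."""
    x1, y1 = cam1
    x2, y2 = cam2
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve cam1 + t*d1 = cam2 + s*d2 for t by Cramer's rule
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None
    bx, by = x2 - x1, y2 - y1
    t = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (x1 + t * d1[0], y1 + t * d1[1])

# Cameras at (0,0) and (10,0) both sighting an event at (5,5)
event = localize_event((0.0, 0.0), math.atan2(5, 5),
                       (10.0, 0.0), math.atan2(5, -5))
```

With more than two sightings, the host would instead solve a small least-squares problem over all rays, which is still far cheaper than shipping bitmap data from every camera.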

  8. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ Software is aimed at Camera data acquisition, storage and display during Camera development, as well as during commissioning and operations on the ASTRI SST-2M telescope prototype that will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS formats. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software adopting the iterative and incremental model in order to maximize software reuse and to implement a system which is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.

  9. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
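    The PU/Central Unit split is a classic supervisor/worker pattern. A hypothetical stand-alone sketch (queues, thread counts, and the frame format are all assumptions for illustration, not the paper's implementation):

```python
import queue
import threading

def processing_unit(in_q, out_q):
    """Worker standing in for a Processing Unit: pulls frames from its
    own queue until it receives the None shutdown sentinel."""
    while True:
        frame = in_q.get()
        if frame is None:
            break
        out_q.put((frame["camera"], frame["id"], "processed"))

in_queues = [queue.Queue() for _ in range(3)]   # one PU per camera
results = queue.Queue()
workers = [threading.Thread(target=processing_unit, args=(q, results))
           for q in in_queues]
for w in workers:
    w.start()

# "Central Unit": dispatch 5 frames per camera, then shut the PUs down
for cam, q in enumerate(in_queues):
    for i in range(5):
        q.put({"camera": cam, "id": i})
    q.put(None)
for w in workers:
    w.join()
```

Scaling to more cameras then amounts to adding queue/worker pairs, which is why the architecture's cost grows roughly linearly with camera count.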

  10. Large Scale Variability Survey of Orion II: mapping the young, low-mass stellar populations

    NASA Astrophysics Data System (ADS)

    Briceño, C.; Calvet, N.; Hartmann, L. W.; Vivas, A. K.

    2000-12-01

    We present further results of our ongoing large scale variability survey of the Orion OB1 Association, carried out with the 8k x 8k CCD Mosaic Camera on the 1m Schmidt telescope at the Venezuela National Observatory. In an area of over 60 square degrees we have unveiled new populations of low-mass young stars over a range of environments, from the dense molecular clouds of the Orion belt region, Ori OB 1b, to areas devoid of gas in Orion OB 1a. These new young stars span ages from 1-2 Myr in Ori OB 1b to roughly 10 Myr in Ori OB 1a, a likely scenario of sequential star formation triggered by the first generation of massive stars. Proxy indicators like Hα emission and near-IR excesses show that accretion from circumstellar disks in the 10 Myr stars of Ori OB 1a has mostly stopped. This population is a numerous analog of groups like TW Hya, making it an excellent laboratory to look for debris disks and study the epoch of planet formation in sparse, non-clustered environments. Research reported herein was funded by NSF grant No. 9987367, and by CONICIT and Ministerio de Ciencia y Tecnología, Venezuela.

  11. Fiber optic cable-based high-resolution, long-distance VGA extenders

    NASA Astrophysics Data System (ADS)

    Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon

    2013-02-01

    Remote transfer of high-resolution video information finds increasing use in detached-display applications for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for extending both the transmittable resolution and distance beyond what standard copper-based cables can reach. In addition to standard digital formats such as HDMI, the high-resolution, long-distance transfer of VGA format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. In this presentation we present the development of a compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extender based on a serialization technique. We employed asynchronous serial transmission and clock regeneration techniques, which enable lower-cost implementation of VGA extenders by removing the necessity for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. As the data are transmitted asynchronously, a 24-bit pixel clock time stamp is employed to regenerate the video pixel clock accurately at the receiver side. In parallel to the video information, stereo audio and RS-232 control signals are transmitted as well.

  12. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in the widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 micron resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
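    For the time-of-flight family mentioned here, the textbook depth relation for a continuous-wave sensor is d = c·Δφ/(4πf). A small sketch of that standard formula (the modulation frequency is illustrative, not tied to any camera in the talk):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(phase_shift_rad, mod_freq_hz):
    """Depth from the measured phase shift of a continuous-wave
    time-of-flight camera: d = c * dphi / (4 * pi * f).
    The measurement is unambiguous only up to c / (2 f)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Largest depth before the phase wraps around."""
    return C / (2.0 * mod_freq_hz)

# 30 MHz modulation: a quarter-cycle phase shift
depth = cw_tof_depth(math.pi / 2, 30e6)
max_range = unambiguous_range(30e6)
```

The wrap-around at c/(2f) is one reason practical systems combine multiple modulation frequencies, a design choice in the same spirit as the robustness work described above.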

  13. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  14. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures the surface motion of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method that tracks and fuses the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculates the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized with a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction.
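    The surface-tracking objective above is linearized and minimized with Gauss-Newton. The toy fit below shows the same linearize-and-solve loop on a tiny non-linear least-squares problem (an exponential model, invented here for illustration and unrelated to the FlyCap energy):

```python
import numpy as np

# Minimal Gauss-Newton loop for a toy non-linear least-squares problem:
# fit y = a * exp(-b * t).  The FlyCap objective is far larger, but each
# iteration has the same shape: linearize, solve the normal equations, update.
def gauss_newton(t, y, p0, iters=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        a, b = p
        model = a * np.exp(-b * t)
        r = y - model                                 # residual vector
        # Jacobian of the residual with respect to (a, b)
        J = np.column_stack([-np.exp(-b * t), a * t * np.exp(-b * t)])
        # Normal equations: (J^T J) dp = -J^T r
        dp = np.linalg.solve(J.T @ J, -(J.T @ r))
        p = p + dp
    return p

t = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * t)                            # noise-free synthetic data
a_hat, b_hat = gauss_newton(t, y, p0=[2.0, 1.0])
print(a_hat, b_hat)                                   # converges to 2.5, 1.3
```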

  15. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0.

    PubMed

    Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai

    2017-05-01

    This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and on image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.

  16. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. Last year's paper covered a software implementation that represents videos in print by generating QR codes and extracting relevant pictures from the video stream. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g., CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, the positions of these images in the book, and the design strategies compared to regular books.

  17. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
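    The calibration the authors describe ends in a linear correction mapping raw, linear camera RGB to target color values. A minimal sketch of fitting such a 3x3 matrix by least squares over chart patches (patch values below are synthetic; the paper's contribution is the scene-specific choice of patches, not the fitting machinery itself):

```python
import numpy as np

rng = np.random.default_rng(0)
M_true = np.array([[ 1.6, -0.3, -0.1],
                   [-0.4,  1.5, -0.2],
                   [ 0.0, -0.3,  1.4]])           # hypothetical camera-to-target matrix
raw = rng.uniform(0.05, 0.95, size=(24, 3))       # linear RGB of 24 chart "patches"
target = raw @ M_true                             # known target values per patch

# Least-squares fit of the 3x3 correction so that target ~= raw @ M
M_fit, *_ = np.linalg.lstsq(raw, target, rcond=None)
print(np.abs(raw @ M_fit - target).max())         # ~0 for noise-free patches
```

    With measured (noisy) patches the same fit minimizes the residual error over all patches, which is why adding patches sampled from the scene itself improves colors outside a standard chart's gamut.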

  18. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
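    Two of the corrections studied, flat-field (vignetting) removal and the illumination-cancelling effect of normalized indices, can be illustrated with synthetic bands. The radial fall-off model, band reflectances, and per-band vignetting strengths below are invented for illustration:

```python
import numpy as np

h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((xx - w / 2)**2 + (yy - h / 2)**2) / (w / 2)**2
vignette_red = 1.0 - 0.30 * r2        # radial fall-off toward the corners
vignette_nir = 1.0 - 0.45 * r2        # assumed stronger fall-off (modified camera)

red_obs = 0.20 * vignette_red         # uniform scene seen through each fall-off
nir_obs = 0.60 * vignette_nir

# Uncorrected index varies across the frame because the bands vignette differently
ndvi_raw = (nir_obs - red_obs) / (nir_obs + red_obs)

# Flat-field frames (images of a uniform target) remove the fall-off per band
red_corr = red_obs / vignette_red
nir_corr = nir_obs / vignette_nir
ndvi_corr = (nir_corr - red_corr) / (nir_corr + red_corr)

print(float(ndvi_raw.std()), float(ndvi_corr.mean()))   # nonzero spread vs flat 0.5
```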

  19. “Ipsilateral, high, single-hand, sideways”—Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery

    PubMed Central

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han

    2016-01-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize our method of holding the camera, known as “ipsilateral, high, single-hand, sideways”, which largely improves the comfort and fluency of surgery. PMID:27867573

  20. Navigation and Remote Sensing Payloads and Methods of the Sarvant Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Molina, P.; Fortuny, P.; Colomina, I.; Remy, M.; Macedo, K. A. C.; Zúnigo, Y. R. C.; Vaz, E.; Luebeck, D.; Moreira, J.; Blázquez, M.

    2013-08-01

    In a large number of scenarios and missions, the technical, operational and economical advantages of UAS-based photogrammetry and remote sensing over traditional airborne and satellite platforms are apparent. Airborne Synthetic Aperture Radar (SAR) or combined optical/SAR operation in remote areas is a case of a typical "dull, dirty, dangerous" mission suitable for unmanned operation; in harsh environments such as the rain forest areas of Brazil, topographic mapping of small to medium, sparsely inhabited remote areas with UAS-based photogrammetry and remote sensing seems a reasonable paradigm. An example of such a system is the SARVANT platform, a fixed-wing aerial vehicle with a six-meter wingspan and a maximum take-off weight of 140 kilograms, able to carry a fifty-kilogram payload. SARVANT includes a multi-band (X and P) interferometric SAR payload, as the P-band enables the topographic mapping of densely tree-covered areas, providing terrain profile information. Moreover, the combination of X- and P-band measurements can be used to extract biomass estimates. Finally, the long-term plan is to incorporate surveying capabilities at optical bands as well and to deliver real-time imagery to a control station. This paper focuses on the remote-sensing concept in SARVANT, composed of the aforementioned SAR sensor and a planned double optical camera configuration covering the visible and the near-infrared spectrum. The flexibility in the selection of the optical payload, ranging from professional medium-format cameras to mass-market small-format cameras, is discussed as a driver in the SARVANT development. The paper also focuses on the navigation and orientation payloads, including the sensors (IMU and GNSS), the measurement acquisition system and the proposed navigation and orientation methods. 
The latter includes the Fast AT procedure, which performs close to traditional Integrated Sensor Orientation (ISO) and better than Direct Sensor Orientation (DiSO), and features the advantage of not requiring the massive image processing load for the generation of tie points, although it does require some Ground Control Points (GCPs). This technique is further supported by the availability of a high quality INS/GNSS trajectory, motivated by single-pass and repeat-pass SAR interferometry requirements.

  1. Early Science with the Large Millimeter Telescope: Detection of Dust Emission in Multiple Images of a Normal Galaxy at z > 4 Lensed by a Frontier Fields Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Alexandra; Battisti, Andrew; Wilson, Grant W.

    We directly detect dust emission in an optically detected, multiply imaged galaxy lensed by the Frontier Fields cluster MACSJ0717.5+3745. We detect two images of the same galaxy at 1.1 mm with the AzTEC camera on the Large Millimeter Telescope, leaving no ambiguity in the counterpart identification. This galaxy, MACS0717-Az9, is at z > 4, and the strong lensing model (μ = 7.5) allows us to calculate an intrinsic IR luminosity of 9.7 × 10^10 L_⊙ and an obscured star formation rate of 14.6 ± 4.5 M_⊙ yr^-1. The unobscured star formation rate from the UV is only 4.1 ± 0.3 M_⊙ yr^-1, which means the total star formation rate (18.7 ± 4.5 M_⊙ yr^-1) is dominated (75%–80%) by the obscured component. With an intrinsic stellar mass of only 6.9 × 10^9 M_⊙, MACS0717-Az9 is one of only a handful of z > 4 galaxies at these lower masses detected in dust emission. This galaxy lies close to the estimated star formation sequence at this epoch. However, it does not lie on the dust obscuration relation (IRX-β) for local starburst galaxies and is instead consistent with the Small Magellanic Cloud attenuation law. This remarkable lower-mass galaxy, showing signs of both low metallicity and high dust content, may challenge our picture of dust production in the early universe.
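    The quoted obscured fraction follows directly from the two star formation rate components given in the abstract:

```python
# Obscured star formation fraction for MACS0717-Az9, from the abstract's numbers.
sfr_ir = 14.6    # obscured SFR from dust emission, Msun/yr
sfr_uv = 4.1     # unobscured SFR from the UV, Msun/yr

sfr_total = sfr_ir + sfr_uv
frac_obscured = sfr_ir / sfr_total
print(f"total = {sfr_total:.1f} Msun/yr, obscured fraction = {frac_obscured:.0%}")
```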

  2. MAPPING SPATIAL/TEMPORAL DISTRIBUTIONS OF GREEN MACROALGAE IN A PACIFIC NORTHWEST COASTAL ESTUARY VIA SMALL FORMAT COLOR INFRARED AERIAL PHOTOGRAPHY

    EPA Science Inventory

    A small format 35 mm hand-held camera with color infrared slide film was used to map blooms of benthic green macroalgae upon mudflats of Yaquina Bay estuary on the central Oregon coast, U.S.A. Oblique photographs were taken during a series of low tide events, when the intertidal...

  3. Image acquisition in the Pi-of-the-Sky project

    NASA Astrophysics Data System (ADS)

    Jegier, M.; Nawrocki, K.; Poźniak, K.; Sokołowski, M.

    2006-10-01

    Modern astronomical image acquisition systems dedicated to sky surveys provide large amounts of data in a single measurement session. During one session that lasts a few hours it is possible to collect as much as 100 GB of data, which needs to be transferred from the camera and processed. This paper presents some aspects of image acquisition in a sky survey system. It describes a dedicated USB Linux driver for the first version of the "Pi of The Sky" CCD camera (later versions also have an Ethernet interface) and the test program for the camera, together with a driver-wrapper providing core device functionality. Finally, the paper contains a description of an algorithm for matching several images based on image features, i.e., star positions and their brightness.
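    The matching step pairs stars across frames. A toy sketch of one common approach, offset voting over candidate pairs, is shown below under the simplifying assumption that the two frames differ by a pure translation (real pipelines also exploit brightness and handle rotation):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
stars_a = rng.uniform(0, 1000, size=(40, 2))      # star positions in frame A
true_shift = np.array([12.0, -7.0])
stars_b = stars_a + true_shift                    # the same stars in a shifted frame

# Every candidate pairing votes for its (rounded) offset; the true
# translation collects one vote per real star pair and dominates.
votes = Counter()
for a in stars_a:
    for b in stars_b:
        votes[tuple(np.round(b - a).astype(int))] += 1

shift_est, n_votes = votes.most_common(1)[0]
print(shift_est, n_votes)                         # (12, -7) with >= 40 votes
```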

  4. Conditions that influence the accuracy of anthropometric parameter estimation for human body segments using shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.

    2005-01-01

    Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate how the accuracy of 3D shape-from-silhouette reconstruction of body segments depends on the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimates. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with an increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
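    The visual hull construction evaluated above can be sketched in a few lines. The example below carves a voxel grid with orthographic silhouettes of a sphere from three axis-aligned views; the camera model and object are simplified stand-ins for the study's 4-64 camera setups, and they illustrate why the hull always overestimates volume:

```python
import numpy as np

n = 64
ax = np.arange(n) - n / 2 + 0.5
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
sphere = X**2 + Y**2 + Z**2 <= (n / 4)**2       # ground-truth object

# Orthographic silhouettes along the x, y and z axes
sil_x = sphere.any(axis=0)                      # indexed by (y, z)
sil_y = sphere.any(axis=1)                      # indexed by (x, z)
sil_z = sphere.any(axis=2)                      # indexed by (x, y)

# A voxel survives only if it projects inside every silhouette
hull = sil_x[None, :, :] & sil_y[:, None, :] & sil_z[:, :, None]

print(int(sphere.sum()), int(hull.sum()))       # hull volume >= true volume
```

    With only three views the hull is the intersection of three prisms and clearly exceeds the sphere; adding views tightens the bound, mirroring the error-vs-camera-count trend reported in the study.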

  5. Krikalev on the aft flight deck with laptop computers

    NASA Image and Video Library

    1998-12-10

    S88-E-5107 (12-11-98) --- Sergei Krikalev, mission specialist representing the Russian Space Agency (RSA), surrounded by monitors and computers on the flight deck, holds a large camera lens. The photo was taken with an electronic still camera (ESC) at 09:33:22 GMT, Dec. 11.

  6. Antennas for Terahertz Applications: Focal Plane Arrays and On-chip Non-contact Measurement Probes

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.

    The terahertz (THz) band provides unique sensing opportunities that enable several important applications such as biomedical imaging, remote non-destructive inspection of packaged goods, and security screening. THz waves can penetrate most materials and can provide unique spectral information in the 0.1--10 THz band with high resolution. In contrast, other imaging modalities, like infrared (IR), suffer from low penetration depths and are thus not attractive for non-destructive evaluation. However, state-of-the-art THz imaging systems typically employ mechanical raster scans using a single detector to acquire two-dimensional images. Such devices tend to be bulky and complicated due to the mechanical parts, and are thus rather expensive to develop and operate. Thus, large-format (e.g. 100x100 pixels), all-electronic THz imaging systems are badly needed to alleviate the space, weight and power (SWAP) factors and enable cost-effective utilization of THz waves for sensing and high-data-rate communications. In contrast, photonic sensors are very compact because light can couple directly to the photodiode without resorting to radiation-coupling topologies. However, in the THz band, due to the longer wavelengths and much lower photon energies, highly efficient antennas with optimized input impedance have to be integrated with THz sensors. Here, we implement novel antenna engineering techniques that are optimized to take advantage of recent technological advances in solid-state THz sensing devices. For example, large-format focal plane arrays (FPAs) have been the Achilles' heel of THz imaging systems. Typically, optical components (lenses, mirrors) are employed to improve the optical performance of FPAs; however, antenna sensors suffer from degraded performance when they are far from the optical axis, minimizing the number of useful FPA elements. By modifying the radiation pattern of the FPA antennas we manage to alleviate this off-axis aberration. 
    Additionally, a butterfly-shaped antenna layout is introduced that enables broadband imaging. The alternative design presented here allows for video-rate imaging in the 0.6--1.2 THz band and maintains a small antenna footprint, resulting in densely packed FPAs. In both antenna designs, we optimize the impedance matching between the antennas and the integrated electronic devices, thus achieving optimum responsivity levels for high sensitivity and low noise performance. Subsequently, we present the design details of the first THz camera and the first images captured with it. With the realized THz camera, imaging of concealed objects is achieved with <1 mm diffraction-limited spatial resolution. Moreover, motivated by the THz camera's real-time image acquisition, we developed the first camera-based THz computed tomography system, which allows rapid cross-sectional imaging (˜2 min). For the design and analysis of the THz camera performance, we developed an in-house hybrid electromagnetic model combining full-wave and high-frequency computational methods. The antenna radiation and impedance computation is first carried out using full-wave modeling of the FPA. Subsequently, we employ scalar diffraction theory to compute the field distribution at any point in space. Thus, the hybrid electromagnetic model allows fast and accurate design of THz antennas and modeling of the complete THz imaging system. Finally, motivated by the novel THz antenna layouts and the quasioptical techniques, we developed a novel non-contact probe measurement method for on-chip device characterization. In the THz regime, traditional contact probes are too small and fragile, inhibiting accurate and reliable circuit measurements. By integrating the device under test (DUT) with THz antennas that act as the measurement probes, we can couple the incident and reflected signal from and to the network analyzer without resorting to any physical connection.

  7. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques that use digital cameras to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract an accurate displacement signal and accomplish the vibration measurement of large-scale structures.
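    The inverse compositional Lucas-Kanade formulation the authors build on precomputes the template gradients and Hessian so that each iteration only warps the current image and solves a small linear system. A minimal translation-only version on a synthetic blob (a generic sketch, not the authors' modified algorithm) might look like:

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at float coordinates (x: column, y: row)."""
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

def ic_lk_translation(T, I, iters=30):
    """Inverse compositional Lucas-Kanade for a pure-translation warp."""
    gy, gx = np.gradient(T)                      # template gradients, computed once
    sd = np.stack([gx.ravel(), gy.ravel()], 1)   # steepest-descent images
    H = sd.T @ sd                                # constant Gauss-Newton Hessian
    rows, cols = np.mgrid[0:T.shape[0], 0:T.shape[1]].astype(float)
    p = np.zeros(2)                              # estimated (dx, dy)
    for _ in range(iters):
        Iw = bilinear(I, cols + p[0], rows + p[1])   # image warped by current p
        dp = np.linalg.solve(H, sd.T @ (Iw - T).ravel())
        p -= dp                                  # inverse compositional update
    return p

# Synthetic test: a smooth blob sampled analytically, shifted by a known
# sub-pixel translation that the tracker should recover.
blob = lambda x, y: np.exp(-((x - 30.0)**2 + (y - 32.0)**2) / 60.0)
rows, cols = np.mgrid[0:64, 0:64].astype(float)
shift = np.array([2.3, -1.4])
T = blob(cols, rows)
I = blob(cols - shift[0], rows - shift[1])       # scene moved by +shift
p = ic_lk_translation(T, I)
print(p)                                         # close to [2.3, -1.4]
```

    Because the Hessian never changes, each iteration is one warp plus one 2x2 solve, which is the property that makes millisecond-level per-frame tracking plausible.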

  8. High Resolution, High-Speed Photography, an Increasingly Prominent Diagnostic in Ballistic Research Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, L.; Muelder, S.

    1999-10-22

    High resolution, high-speed photography is becoming a prominent diagnostic in ballistic experimentation. The development of high speed cameras utilizing electro-optics and the use of lasers for illumination now provide the capability to routinely obtain high quality photographic records of ballistic style experiments. The purpose of this presentation is to review in a visual manner the progress of this technology and how it has impacted ballistic experimentation. Within the framework of development at LLNL, we look at the recent history of large format high-speed photography, and present a number of photographic records that represent the state of the art at the time they were made. These records are primarily from experiments involving shaped charges. We also present some examples of current photographic technology, developed within the ballistic community, that has application to hydro diagnostic experimentation at large. This paper is designed primarily as an oral-visual presentation. This written portion is to provide general background, a few examples, and a bibliography.

  9. The 1982 control network of Mars

    NASA Technical Reports Server (NTRS)

    Davies, M. E.; Katayama, F. Y.

    1983-01-01

    Attention is given to a planet-wide control network of Mars that was computed in September 1982 using a large single-block analytical triangulation with 47,524 measurements of 6853 control points on 1054 Mariner 9 and 757 Viking pictures. In all, 19,139 normal equations were solved, with a resulting standard error of measurement of 18.06 microns. The control points identified by name and letter designation are given, as are the aerographic coordinates of the control points. In addition, the coordinates of the Viking I lander site are given: latitude, 22.480 deg; longitude, 47.962 deg (radius, 3389.32 km). This study expands and updates the previously published network (1978). It is noted that the computation differs in many respects from standard aerial mapping photogrammetric practice. In comparison with aerial mapping photography, the television formats are small and the focal lengths are long; stereo coverage is rare, the scale of the pictures varies greatly, and the residual camera distortions are large.

  10. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique, mostly used in rivers, for measuring two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located at a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, at closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and finally rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually thanks to the already orthorectified images. During the monitoring program (running since 2011) we recorded three debris flow events at the sediment trap (each with very different surge dynamics). The camera in the gully was in operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
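    Once frames are orthorectified, the core LSPIV computation is patch-wise cross-correlation between consecutive images. A minimal sketch with synthetic frames (the frame rate, ground sampling distance, patch size, and search range below are invented values):

```python
import numpy as np

rng = np.random.default_rng(3)
frame0 = rng.uniform(size=(120, 120))
dy, dx = 4, -3                                    # true displacement (pixels/frame)
frame1 = np.roll(np.roll(frame0, dy, axis=0), dx, axis=1)

py, px, P = 40, 60, 24                            # interrogation patch corner and size
patch = frame0[py:py + P, px:px + P]

# Exhaustive search: pick the candidate shift maximizing the correlation
best, best_d = -2.0, (0, 0)
for cy in range(-8, 9):
    for cx in range(-8, 9):
        cand = frame1[py + cy:py + cy + P, px + cx:px + cx + P]
        c = np.corrcoef(patch.ravel(), cand.ravel())[0, 1]
        if c > best:
            best, best_d = c, (cy, cx)

# Convert pixels/frame to a surface velocity (assumed camera parameters)
fps, gsd = 10.0, 0.05                             # frames/s, metres per pixel
vy, vx = best_d[0] * fps * gsd, best_d[1] * fps * gsd
print(best_d, (vy, vx))                           # (4, -3) -> (2.0, -1.5) m/s
```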

  11. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and marked reference points are often unavailable in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate by one to ten centimeters from tachymeter reference measurements.
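    One building block behind tying point clouds into a common vehicle coordinate system is the least-squares rigid alignment of corresponded 3D point sets (the Kabsch algorithm). The full method additionally involves camera resection and bundle adjustment; this is only the point-set alignment step, with synthetic points:

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares rotation R and translation t with B ~= A @ R.T + t
    (Kabsch algorithm via SVD)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)                     # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - ca @ R.T

rng = np.random.default_rng(7)
pts = rng.uniform(-5, 5, size=(20, 3))            # e.g. ground control points
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true                   # the same points in another frame

R, t = rigid_align(pts, moved)
print(np.abs(R - R_true).max(), np.abs(t - t_true).max())   # both ~0
```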

  12. Getting There is Half the Fun

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This map shows the Mars Exploration Rover Spirit's past and future routes across the Gusev Crater floor. The solid red line shows where the rover has traveled so far, from the lander to the rim of the large crater dubbed 'Bonneville.' The dotted red line indicates proposed future paths to the Columbia Hills. Rover team members have not yet decided which direction Spirit will travel across Bonneville's ejecta (the blanket of material expelled from it during formation) and toward the hills, as illustrated by the two diverging dotted lines. Along the way, Spirit will stop to investigate interesting targets, including craters and plains deposits. The journey to the hills is estimated to take about two months, or 60 sols. The underlying image in this map was taken by the camera on NASA's Mars Global Surveyor orbiter.

  13. IAE - Inflatable Antenna Experiment

    NASA Image and Video Library

    1996-05-20

    STS077-150-010 (20 May 1996) --- Soon after leaving the cargo bay of the Space Shuttle Endeavour, the Spartan 207/Inflatable Antenna Experiment (IAE) payload goes through its inflation process, backdropped over clouds. The view was photographed with a large format still camera on the first full day of in-space operations by the six-member crew. Managed by Goddard Space Flight Center (GSFC), Spartan is designed to provide short-duration, free-flight opportunities for a variety of scientific studies. The Spartan configuration on this flight is unique in that the IAE is part of an additional separate unit which is ejected once the experiment is completed. The IAE experiment will lay the groundwork for future technology development in inflatable space structures, which will be launched and then inflated like a balloon on-orbit.

  14. Avalanche photo diodes in the observatory environment: lucky imaging at 1-2.5 microns

    NASA Astrophysics Data System (ADS)

    Vaccarella, A.; Sharp, R.; Ellis, M.; Singh, S.; Bloxham, G.; Bouchez, A.; Conan, R.; Boz, R.; Bundy, D.; Davies, J.; Espeland, B.; Hart, J.; Herrald, N.; Ireland, M.; Jacoby, G.; Nielsen, J.; Vest, C.; Young, P.; Fordham, B.; Zovaro, A.

    2016-08-01

    The recent availability of large format near-infrared detectors with sub-electron readout noise is revolutionizing our approach to wavefront sensing for adaptive optics. However, as with all near-infrared detector technologies, challenges exist in moving from the comfort of the laboratory test-bench into the harsh reality of the observatory environment. As part of the broader adaptive optics program for the GMT, we are developing a near-infrared Lucky Imaging camera for operational deployment at the ANU 2.3 m telescope at Siding Spring Observatory. The system provides an ideal test-bed for the rapidly evolving Selex/SAPHIRA eAPD technology while delivering scientific imaging at angular resolution rivalling the Hubble Space Telescope at wavelengths λ = 1.3-2.5 μm.

  15. Earth observations taken during the STS-103 mission

    NASA Image and Video Library

    1999-12-25

    STS103-501-152 (19-27 December 1999) --- One of the astronauts aboard the Earth-orbiting Space Shuttle Discovery used a handheld large format camera to photograph this southern Florida scene. The city of Miami encroaches on the eastern edge of the Everglades, which constitute an International Biosphere Reserve and World Heritage Site. This subtropical wilderness encompasses a relatively flat (not exceeding 2.4 m above sea level) saw-grass marsh region of 10,000 square kilometers (4,000 square miles). According to NASA Earth scientists, the only source of water in the Everglades is rainfall. The flow of water is detectable in this image, slowly moving from Lake Okeechobee toward Florida Bay (the light blue, shallow area between the mainland and the Keys) and the southwestern Florida coast.

  16. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  17. Students' framing of laboratory exercises using infrared cameras

    NASA Astrophysics Data System (ADS)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-12-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the predict-observe-explain approach of White and Gunstone. The activities involved central thermal concepts that focused on heat conduction and dissipative processes such as friction and collisions. Students' interactions within each activity were videotaped, and the analysis focuses on how a purposefully selected group of three students engaged with the exercises. As the basis for an interpretative study, a "thick" narrative description of the students' epistemological and conceptual framing of the exercises, and of how they took advantage of the disciplinary affordance of IR cameras in the thermal domain, is provided. Findings include that the students largely shared their conceptual framing of the four activities but differed among themselves in their epistemological framing, for instance, in the extent to which they found it relevant to digress from the laboratory instructions when inquiring into thermal phenomena. In conclusion, the study unveils the disciplinary affordances of infrared cameras, in the sense of their use in providing access to knowledge about macroscopic thermal science.

  18. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and the presence of continuing current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames/s). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1,000 frames/s). The camera observing site is located in São José dos Campos (23° S, 46° W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m. It is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  19. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    NASA Astrophysics Data System (ADS)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
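    The linearity with exposure that the paper verifies can be checked with an ordinary least-squares fit of mean raw signal against exposure time. A minimal sketch, using hypothetical numbers standing in for measured dark-subtracted mean DN values (not the paper's data):

```python
import numpy as np

def linearity_check(exposures, mean_dn):
    """Fit mean raw signal vs. exposure time and report slope, intercept,
    and the R^2 of the straight-line model used to test linearity."""
    exposures = np.asarray(exposures, dtype=float)
    mean_dn = np.asarray(mean_dn, dtype=float)
    slope, intercept = np.polyfit(exposures, mean_dn, 1)
    fit = slope * exposures + intercept
    ss_res = np.sum((mean_dn - fit) ** 2)
    ss_tot = np.sum((mean_dn - mean_dn.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical mean dark-subtracted raw DN values at five exposure times (ms)
exposures = [1, 2, 4, 8, 16]
mean_dn = [50.3, 99.8, 200.1, 399.6, 800.2]

slope, intercept, r2 = linearity_check(exposures, mean_dn)
```

    An R² very close to 1 over the usable exposure range is the usual acceptance criterion; the same fit can be repeated per ISO setting to test gain linearity.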

  20. BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications

    NASA Astrophysics Data System (ADS)

    Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard

    2013-06-01

    Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-MTM imaging module, in a VGA format that reads out in 30 and 60 Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers, and optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field-upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.

  1. Queensland

    Atmospheric Science Data Center

    2013-04-16

    ... of a 157 kilometer x 210 kilometer area. The natural-color image is composed of data from the camera's red, green, and blue bands. In the ... MISR Team. Text acknowledgment: Clare Averill, David J. Diner, Graham Bothwell (Jet Propulsion Laboratory). Other formats ...

  2. Automated Meteor Detection by All-Sky Digital Camera Systems

    NASA Astrophysics Data System (ADS)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
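    A minimal stand-in for the detection stage (the actual pipeline also includes pre-processing and recognition over time sequences) is per-pixel change detection against a median background frame. A sketch on synthetic frames, with the threshold factor chosen arbitrarily for illustration:

```python
import numpy as np

def transient_mask(frames, new_frame, k=8.0):
    """Flag pixels in new_frame that are much brighter than the per-pixel
    median of recent frames -- a crude meteor-candidate detector."""
    stack = np.stack(frames)
    background = np.median(stack, axis=0)
    sigma = stack.std(axis=0).mean() + 1e-9   # global noise estimate
    return (new_frame - background) > k * sigma

rng = np.random.default_rng(2)
frames = [100.0 + rng.normal(0.0, 2.0, (48, 48)) for _ in range(10)]
meteor_frame = 100.0 + rng.normal(0.0, 2.0, (48, 48))
diag = np.arange(48)
meteor_frame[diag, diag] += 60.0   # bright diagonal streak across the frame

mask = transient_mask(frames, meteor_frame)
```

    Real systems would follow this with a line-fitting or tracking step to separate meteor traces from point-like transients such as stars, satellites, and hot pixels.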

  3. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
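    The equisolid-angle projection mentioned above relates a ray's zenith angle θ to its radial position on the image plane by r = 2f·sin(θ/2). A minimal sketch of fitting the focal length from sun detections, on simulated data (the real calibration additionally solves for the optical center, camera orientation, and distortion terms):

```python
import numpy as np

def equisolid_radius(theta, f):
    # Equisolid-angle fisheye projection: r = 2 f sin(theta / 2)
    return 2.0 * f * np.sin(theta / 2.0)

def fit_focal_length(theta, r):
    # One-parameter least-squares estimate of f from (angle, radius) pairs
    x = 2.0 * np.sin(theta / 2.0)
    return float(np.dot(x, r) / np.dot(x, x))

# Simulated sun detections across a day for an ideal camera with f = 700 px
theta = np.radians(np.linspace(5.0, 80.0, 12))   # sun zenith angles
r_obs = equisolid_radius(theta, 700.0)           # radial pixel positions

f_hat = fit_focal_length(theta, r_obs)
```

    Because the model is linear in f once the angles are known from the solar position algorithm, no special calibration target is needed, which is the point the paper exploits.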

  4. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be implemented into printed photo books and greeting cards. We will show that, surprisingly or not, pictures from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos, by generating QR codes and extracting relevant pictures from the video stream via a software implementation, was the subject of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.

  5. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
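    As a toy illustration of the selection step only (not the geometry-, statistics-, or game-theory-based criteria the chapter actually compares), a hand-off occurs whenever the camera with the best utility score for a tracked object changes between frames. The camera names and scores below are hypothetical:

```python
def select_camera(scores):
    """Return the camera with the highest utility score for an object;
    a hand-off occurs when the best camera changes between frames."""
    return max(scores, key=scores.get)

# Hypothetical per-camera utility scores for one tracked object
prev = select_camera({"cam1": 0.8, "cam2": 0.4})
curr = select_camera({"cam1": 0.3, "cam2": 0.9})
handoff = prev != curr   # the object is handed off from cam1 to cam2
```

    In the approaches the chapter surveys, the utility would encode visibility, viewing angle, or a game-theoretic payoff rather than a raw score.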

  6. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
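    The paper's preferred technique matches feature descriptors extracted from radiant power images, i.e. images normalized by their exposure so that intensity differences between cameras largely cancel. As a simpler exposure-robust stand-in (not the authors' descriptor-based method), the sketch below normalizes two synthetic exposures and recovers their translation by phase correlation:

```python
import numpy as np

def to_radiant(img, exposure):
    # Normalizing by exposure time makes differently exposed frames comparable
    return np.asarray(img, dtype=float) / exposure

def phase_correlation_shift(a, b):
    # Estimate the integer translation of a relative to b via phase correlation
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
short = to_radiant(scene * 10.0, exposure=10.0)   # short, dim exposure
long_ = to_radiant(np.roll(scene, (3, -5), axis=(0, 1)) * 80.0, exposure=80.0)

shift = phase_correlation_shift(long_, short)
```

    The key idea shared with the paper is that alignment is computed in a radiometrically normalized space, so a large exposure gap between cameras does not defeat the matcher.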

  7. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028

  8. Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)

    NASA Astrophysics Data System (ADS)

    Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa

    2016-08-01

    We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 μm, with the dispersed light being imaged in each channel by an f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.

  9. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 gigapixels) and its close-coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies are considered for this telescope/camera environment. MMR-Technology's Mixed Refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described.

  10. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George

    1986-01-07

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  11. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J. C.; Rowe, R. Wanda; Zubal, I. George

    1986-01-01

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  12. STS-61 air-bearing floor training in bldg 9N with Astronaut Jeff Hoffman

    NASA Image and Video Library

    1993-06-07

    Making use of the air-bearing floor in JSC's Shuttle mockup and integration laboratory, Astronaut Jeffrey A. Hoffman practices working with the Hubble Space Telescope's Wide Field/Planetary Camera (WF/PC). Changing out the large camera is one of several jobs to be performed by STS-61.

  13. Camera for detection of cosmic rays of energy more than 10 Eev on the ISS orbit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garipov, G. K.; Khrenov, B. A.; Panasyuk, M. I.

    1998-06-15

    The concept of EHE CR observation from the ISS orbit is discussed. A design of the camera at the Russian segment of the ISS, comprising a large area (60 m²) parabolic mirror with a photomultiplier pixel retina in its focal plane, is described.

  14. Effects of Camera Arrangement on Perceptual-Motor Performance in Minimally Invasive Surgery

    ERIC Educational Resources Information Center

    Delucia, Patricia R.; Griswold, John A.

    2011-01-01

    Minimally invasive surgery (MIS) is performed for a growing number of treatments. Whereas open surgery requires large incisions, MIS relies on small incisions through which instruments are inserted and tissues are visualized with a camera. MIS results in benefits for patients compared with open surgery, but degrades the surgeon's perceptual-motor…

  15. Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.

    2014-03-01

    A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.

  16. Three-dimensional cinematography with control object of unknown shape.

    PubMed

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which includes the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m × 5 m × 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
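    Once internal and external parameters are known for both cameras, a 3D point can be recovered from its two image locations by linear triangulation. A sketch using standard DLT-style least squares on two hypothetical projection matrices (the paper's own parameter-estimation procedure from crosses and poles is not reproduced here):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT-style) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space vector = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical calibrated cameras and a known test point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    With noisy image measurements the same least-squares formulation degrades gracefully, which is why reconstruction accuracy in the paper is reported as RMS error over the calibrated volume.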

  17. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
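    The abstract does not specify how the NVC focus score is computed; a common choice for such a sharpness measure is the variance of the image Laplacian, which drops as the eye image defocuses. A sketch assuming that measure, on synthetic frames:

```python
import numpy as np

def focus_score(img):
    """Variance of the 4-neighbour Laplacian: a common sharpness measure
    that decreases as an image defocuses."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
# Crude defocus: 3x3 box blur of the same frame (wrap-around borders)
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

score_sharp, score_blurred = focus_score(sharp), focus_score(blurred)
```

    An auto-focus loop would adjust the NVC lens to maximize such a score, with the WVC facial-width estimate providing the coarse starting distance as described above.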

  18. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  19. Hubble Tarantula Treasury Project: Unraveling Tarantula's Web. I. Observational Overview and First Results

    NASA Technical Reports Server (NTRS)

    Sabbi, E.; Anderson, J.; Lennon, D. J.; van der Marel, R. P.; Aloisi, A.; Boyer, Martha L.; Cignoni, M.; De Marchi, G.; De Mink, S. E.; Evans, C. J.; et al.

    2013-01-01

    The Hubble Tarantula Treasury Project (HTTP) is an ongoing panchromatic imaging survey of stellar populations in the Tarantula Nebula in the Large Magellanic Cloud that reaches into the sub-solar mass regime (<0.5 solar masses). HTTP utilizes the capability of the Hubble Space Telescope to operate the Advanced Camera for Surveys and the Wide Field Camera 3 in parallel to study this remarkable region in the near-ultraviolet, optical, and near-infrared spectral regions, including narrow-band H-alpha images. The combination of all these bands provides a unique multi-band view. The resulting maps of the stellar content of the Tarantula Nebula within its main body provide the basis for investigations of star formation in an environment resembling the extreme conditions found in starburst galaxies and in the early universe. Access to detailed properties of individual stars allows us to begin to reconstruct the temporal and spatial evolution of the stellar skeleton of the Tarantula Nebula on a sub-parsec scale. In this first paper we describe the observing strategy, the photometric techniques, and the upcoming data products from this survey and present preliminary results obtained from the analysis of the initial set of near-infrared observations.

  20. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics, and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan on implementing additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty, and the activity of the tracked object in relation to sensitive features of the environment.
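    The core operation of Itti-Koch-style saliency is a center-surround difference: a location is conspicuous when it differs from its neighborhood at a coarser scale. A toy single-channel sketch (the full algorithm combines color, orientation, intensity, and motion maps across many scales):

```python
import numpy as np

def box_blur(img, r):
    # Separable box blur with wrap-around borders (illustrative only)
    out = sum(np.roll(img, i, 0) for i in range(-r, r + 1)) / (2 * r + 1)
    return sum(np.roll(out, j, 1) for j in range(-r, r + 1)) / (2 * r + 1)

def intensity_saliency(img):
    """Toy center-surround saliency: |fine scale - coarse scale|, the core
    operation behind Itti-Koch-style conspicuity maps."""
    return np.abs(box_blur(img, 1) - box_blur(img, 8))

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0   # a single conspicuous bright patch

sal = intensity_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

    In the surveillance system described above, each camera agent would direct attention (and hence tracking resources) toward the peaks of such a combined saliency map.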

  1. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
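    Vignetting correction of the kind described is typically performed by dividing each image by a normalized flat-field frame. The sketch below uses a synthetic cos⁴-style falloff on a uniform scene; the study's own correction model is not reproduced here:

```python
import numpy as np

def correct_vignetting(img, flat):
    """Divide by a flat-field image normalized to its maximum: the usual
    correction for radial brightness falloff toward image corners."""
    return img / (flat / flat.max())

# Synthetic radial (cos^4-like) falloff applied to a uniform scene
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
flat = np.cos(np.clip(r, 0.0, 1.0) * np.pi / 4) ** 4

uniform_scene = np.full((h, w), 200.0)
observed = uniform_scene * flat / flat.max()   # vignetted observation
restored = correct_vignetting(observed, flat)
```

    In practice the flat-field frame is measured from images of a uniformly illuminated target, which is why the study stresses keeping camera settings fixed across acquisition dates.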

  2. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
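
    A core step of any such tracking pipeline is associating existing tracks with detections in each new frame. The sketch below shows a deliberately simplified greedy nearest-centroid association; it is a stand-in for illustration only and does not reproduce the paper's occlusion handling.

```python
# Minimal frame-to-frame track association by nearest centroid,
# a simplified illustration of one stage of a tracking pipeline.
import math

def associate(tracks, detections, max_dist=50.0):
    """Greedily match existing track positions to new detections."""
    assignments = {}
    used = set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for i, (dx, dy) in enumerate(detections):
            if i in used:
                continue
            d = math.hypot(tx - dx, ty - dy)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

tracks = {1: (10.0, 10.0), 2: (100.0, 40.0)}
detections = [(98.0, 42.0), (12.0, 11.0)]
print(associate(tracks, detections))  # {1: 1, 2: 0}
```

    Real pipelines replace the greedy loop with globally optimal assignment (e.g. the Hungarian algorithm) and add appearance models to survive occlusions, but the data flow is the same.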

  3. The star formation history in the Andromeda halo

    NASA Astrophysics Data System (ADS)

    Brown, Thomas M.

I present the preliminary results of a program to measure the star formation history in the halo of the Andromeda galaxy. Using the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope, we obtained the deepest optical images of the sky to date, in a field on the southeast minor axis of Andromeda, 51' (11 kpc) from the nucleus. The resulting color-magnitude diagram (CMD) contains approximately 300,000 stars and extends more than 1.5 mag below the main sequence turnoff, with 50% completeness at V = 30.7 mag. We interpret this CMD using comparisons to ACS observations of five Galactic globular clusters through the same filters, and through χ²-fitting to a finely-spaced grid of calibrated stellar population models. We find evidence for a major (~30%) intermediate-age (6-8 Gyr) metal-rich ([Fe/H] > -0.5) population in the Andromeda halo, along with a significant old metal-poor population akin to that in the Milky Way halo. The large spread in ages suggests that the Andromeda halo formed as a result of a more violent merging history than that in our own Milky Way.

  4. Faint Submillimeter Galaxies Behind Lensing Clusters

    NASA Astrophysics Data System (ADS)

    Hsu, Li-Yen; Lauchlan Cowie, Lennox; Barger, Amy J.; Desai, Vandana; Murphy, Eric J.

    2017-01-01

    Faint submillimeter galaxies are the major contributors to the submillimeter extragalactic background light and hence the dominant star-forming population in the dusty universe. Determining how much these galaxies overlap the optically selected samples is critical to fully account for the cosmic star formation history. Observations of massive cluster fields are the best way to explore this faint submillimeter population, thanks to gravitational lensing effects. We have been undertaking a lensing cluster survey with the SCUBA-2 camera on the James Clerk Maxwell Telescope to map nine galaxy clusters, including the northern five clusters in the HST Frontier Fields program. We have also been using the Submillimeter Array and the Very Large Array to determine the accurate positions of our detected sources. Our observations have discovered high-redshift dusty galaxies with far-infrared luminosities similar to that of the Milky Way or luminous infrared galaxies. Some of these galaxies are still undetected in deep optical and near-infrared images. These results suggest that a substantial amount of star formation in even the faint submillimeter population may be hidden from rest-frame optical surveys.

  5. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures for scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
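
    The register-plus-assessment idea can be sketched as a small data structure with a change check. All field names and tolerances below are assumptions invented for illustration, not the paper's actual schema.

```python
# Illustrative sketch of the register idea: per-camera metadata about
# the optical chain and scene, with a check that flags relevant drift
# to the VSS administrator. Field names and tolerances are assumptions.
from dataclasses import dataclass

@dataclass
class CameraRecord:
    camera_id: str
    focal_length_mm: float       # intrinsic calibration (simplified)
    pan_tilt: tuple              # extrinsic state
    lux: float                   # scene lighting estimate
    crowd_level: int             # scene complexity, e.g. people count

def relevant_change(old: CameraRecord, new: CameraRecord, lux_tol=50.0):
    """Signal when the optical chain or scene drifts beyond tolerance."""
    return (old.pan_tilt != new.pan_tilt
            or abs(old.lux - new.lux) > lux_tol)

a = CameraRecord("cam-1", 4.0, (0.0, 0.0), 300.0, 5)
b = CameraRecord("cam-1", 4.0, (10.0, 0.0), 310.0, 6)
print(relevant_change(a, b))  # True: the camera was panned
```

    The third part of the solution would then consult such records to pick VCA task parameters automatically instead of requiring manual reconfiguration.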

  6. Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.

    1992-01-01

The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image on four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

  7. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).

    PubMed

    Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander

    2017-10-26

The neutron and gamma radiations in large fusion reactors are responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful for good practice in the operation of the video systems.
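
    The equivalent-fluence concept amounts to weighting the measured spectrum by the silicon displacement-damage function and normalizing to the damage of a 1 MeV neutron. The sketch below shows only the arithmetic; the spectrum and damage values are made-up illustrations, not ASTM E722 tabulated data.

```python
# Hedged sketch of the 'equivalent 1 MeV neutron fluence' arithmetic:
# Phi_eq = sum_i(phi_i * D_i) / D(1 MeV), over discrete energy bins.
# All numbers below are illustrative, not E722 data.
def equivalent_fluence(fluence_per_bin, damage_per_bin, damage_1mev):
    """Damage-weighted fluence normalized to 1 MeV neutron damage."""
    return sum(p * d for p, d in zip(fluence_per_bin, damage_per_bin)) / damage_1mev

phi = [1e10, 5e9, 1e9]          # neutrons/cm^2 in three energy bins
dmg = [0.5, 1.0, 2.0]           # relative damage per bin, illustrative
print(f"{equivalent_fluence(phi, dmg, 1.0):.2e}")  # 1.20e+10
```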

  8. Cobra Hoods Coming At You

    NASA Image and Video Library

    2004-06-17

This 3-D image taken by the left and right eyes of the panoramic camera on NASA's Mars Exploration Rover Spirit shows the odd rock formation dubbed 'Cobra Hoods' at center. 3-D glasses are necessary to view this image.

  9. Opportunity View Leaving Cape York

    NASA Image and Video Library

    2013-06-07

    NASA Mars Exploration Rover Opportunity used its navigation camera to acquire this view looking toward the southwest. The scene includes tilted rocks at the edge of a bench surrounding Cape York, with Burns formation rocks exposed in Botany Bay.

  10. Camera trap placement and the potential for bias due to trails and other features

    PubMed Central

    Forrester, Tavis D.

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11–33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9–38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design. PMID:29045478

  11. Camera trap placement and the potential for bias due to trails and other features.

    PubMed

    Kolowski, Joseph M; Forrester, Tavis D

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11-33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9-38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design.

  12. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

A feasibility study of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera during the operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device for wearing and holding throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported full-HD format, and GoPro uniquely offers 2.7K or 4K resolution; video resolution was best with GoPro. Regarding field of view (FOV), GoPro can adjust its point of interest and FOV according to the surgery, and its narrow-FOV option was the best for recording video clips to share. Google Glass has potential through the use of application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming to an audience, but only Google Glass has a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgeries as the devices and associated programs develop in the future.

  13. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
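
    The backward-compatible idea can be sketched in a few lines: a tone-mapped LDR image is stored as the ordinary JPEG payload, while a residual travels in a side channel (e.g. an APPn marker), letting legacy decoders ignore it. The global tone-mapping operator and the log-ratio residual below are illustrative choices, not the paper's specific architecture.

```python
# Conceptual sketch of backward-compatible HDR-in-JPEG: the LDR image
# goes into the normal JPEG payload, the residual into a side channel.
# Tone-mapping operator and residual definition are illustrative only.
import numpy as np

def tone_map(hdr):
    """Simple global operator mapping [0, inf) into [0, 1)."""
    return hdr / (1.0 + hdr)

def encode(hdr):
    ldr = tone_map(hdr)
    residual = np.log(np.clip(hdr, 1e-6, None)) - np.log(np.clip(ldr, 1e-6, None))
    return ldr, residual          # ldr -> JPEG payload, residual -> APPn marker

def decode(ldr, residual):
    return np.clip(ldr, 1e-6, None) * np.exp(residual)

hdr = np.array([0.01, 1.0, 50.0, 1000.0])   # scene luminances, toy values
ldr, res = encode(hdr)
print(np.allclose(decode(ldr, res), hdr))   # True: exact before quantization
```

    In a real codec both the LDR payload and the residual are lossily compressed, so reconstruction is approximate; the point of the sketch is only the container-level split that keeps legacy decoders working.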

  14. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2007-01-01

The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts 2-6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth- Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.

  15. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan F.; Barbier, L. M.; Barthelmy, S. D.; Cummings, J. R.; Fenimore, E. E.; Gehrels, N.; Hullinger, D. D.; Markwardt, C. B.; Palmer, D. M.; Parsons, A. M.; hide

    2006-01-01

The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts 2-6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth- Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 27 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.

  16. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2007-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z>6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth- Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.

  17. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV landing safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Position System (GPS)-denied environments.
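
    The geometric core of such a ground-based guidance system is recovering a marker's 3D position from calibrated views. As a minimal sketch, a rectified two-camera pair with focal length f (pixels) and baseline B (metres) is assumed below; the actual system uses a larger calibrated array and laser markers.

```python
# Hedged sketch: 3D position of a marker from a rectified stereo pair.
# f (pixels) and B (metres) are assumed calibration values, chosen for
# illustration; the paper's array calibration is more general.
def triangulate(xl, xr, y, f=1000.0, B=2.0):
    """Depth from disparity, then back-projection to camera coordinates."""
    disparity = xl - xr                 # pixels
    Z = f * B / disparity               # metres
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z

# Marker seen at x=120 px (left), x=100 px (right), y=50 px in both.
print(triangulate(120.0, 100.0, 50.0))  # (12.0, 5.0, 100.0)
```

    Tracking such positions over successive frames then yields the speed estimates the guidance loop needs.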

  18. The LST scientific instruments

    NASA Technical Reports Server (NTRS)

    Levin, G. M.

    1975-01-01

    Seven scientific instruments are presently being studied for use with the Large Space Telescope (LST). These instruments are the F/24 Field Camera, the F/48-F/96 Planetary Camera, the High Resolution Spectrograph, the Faint Object Spectrograph, the Infrared Photometer, and the Astrometer. These instruments are being designed as facility instruments to be replaceable during the life of the Observatory.

  19. Absolute colorimetric characterization of a DSLR camera

    NASA Astrophysics Data System (ADS)

    Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo

    2014-03-01

A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t, to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.

  20. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography, and RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as μNID, nGEM and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, the center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used with a super-resolution technique, and the spatial resolution was further improved.
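
    The center-of-gravity step can be sketched directly: each scintillation spot spans several pixels, and its intensity-weighted centroid gives a sub-pixel position. The toy spot below is an illustration, not facility data.

```python
# Sketch of center-of-gravity processing: the intensity-weighted
# centroid of a small spot image localizes the neutron event to
# sub-pixel precision. The 3x3 spot is a toy example.
import numpy as np

def center_of_gravity(spot):
    """Intensity-weighted centroid (row, col) of a small spot image."""
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total

# A spot slightly brighter toward the lower-right of its 3x3 window.
spot = np.array([[0.0, 1.0, 0.0],
                 [1.0, 4.0, 2.0],
                 [0.0, 2.0, 0.0]])
print(center_of_gravity(spot))  # centroid pulled below/right of (1, 1)
```

    Accumulating these sub-pixel positions over many pulses builds an image whose effective resolution exceeds the camera's pixel pitch.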

  1. Distributed Sensing and Processing for Multi-Camera Networks

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  2. History of Hubble Space Telescope (HST)

    NASA Image and Video Library

    1997-01-02

    What look like giant twisters are spotted by the Hubble Space Telescope (HST). These images are, in actuality, pillars of gases that are in the process of the formation of a new star. These pillars can be billions of miles in length and may have been forming for millions of years. This one formation is located in the Lagoon Nebula and was captured by the Hubble's wide field planetary camera-2 (WFPC-2).

  3. Real-time rendering for multiview autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.

    2006-02-01

In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video not only its color is given but also, for example, its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
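
    The basic parallactic relation between perceived depth and screen disparity follows from similar triangles: for a viewer at distance D from the screen with eye separation E, a point perceived at depth z behind the screen needs screen disparity d = E·z/(D + z). The function and toy values below are a hedged illustration, not the paper's full framework.

```python
# Hedged sketch of the depth-to-disparity relation d = E*z/(D+z) for a
# point perceived at depth z behind the screen plane. E (eye separation)
# and D (viewing distance) are illustrative values in metres.
def screen_disparity(z, eye_sep=0.065, view_dist=3.0):
    """Screen disparity (metres) for perceived depth z behind the screen."""
    return eye_sep * z / (view_dist + z)

# Points at, slightly behind, and far behind the screen plane:
for z in (0.0, 1.0, 100.0):
    print(screen_disparity(z))
```

    Note the saturation: as z grows, disparity approaches the eye separation E, which bounds how much depth the display can convey behind the screen.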

  4. Research on a solid state-streak camera based on an electro-optic crystal

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang

    2006-06-01

With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image-converting tube with a wavelength-sensitive photocathode ranging from the x-ray to the near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera accomplishes time resolution by direct photon-beam deflection using the electro-optic effect, and can replace the current streak camera from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits and a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10⁻¹² m/V), together with an optimized optic system, may lead to a time resolution of better than 1 ns.

  5. Star Formation in Taurus: Preliminary Results from 2MASS

    NASA Technical Reports Server (NTRS)

    Beichman, C. A.; Jarrett, T.

    1993-01-01

    Data with the 2MASS prototype camera were obtained in a 2.3 sq. deg region in Taurus containing Heiles Cloud 2, a region known from IRAS observations to contain a number of very young solar type stars.

  6. The Zwicky Transient Facility Camera

    NASA Astrophysics Data System (ADS)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  7. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
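
    The alignment at the heart of such global calibration is finding the rigid transform between two cameras' coordinate frames from shared target points. The paper refines this with Levenberg-Marquardt on reprojection error; the closed-form Kabsch/SVD solution below is a simplified stand-in operating on already-triangulated 3D points.

```python
# Sketch of the underlying alignment: least-squares rigid transform
# (R, t) with Q ~= R @ P + t, via the Kabsch/SVD method. This is a
# simplified stand-in for the paper's reprojection-error optimization.
import numpy as np

def rigid_transform(P, Q):
    """Rigid R, t aligning point set P to Q (both as 3xN arrays)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Ground truth: 90-degree rotation about z plus a translation.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
P = np.array([[0.0, 1.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
Q = R_true @ P + t_true

R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

    With noisy measurements this closed-form estimate typically serves as the initial guess that an iterative scheme such as Levenberg-Marquardt then refines against the reprojection error.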

  8. Novel X-ray backscatter technique for detection of dangerous materials: application to aviation and port security

    NASA Astrophysics Data System (ADS)

    Kolkoori, S.; Wrobel, N.; Osterloh, K.; Zscherpel, U.; Ewert, U.

    2013-09-01

    Radiological inspection is, in general, the nondestructive testing (NDT) method of choice for detecting bulk explosives in large objects. In contrast to personal luggage, cargo or building components constitute a complexity that may significantly hinder the detection of a threat by conventional X-ray transmission radiography. This article presents a novel X-ray backscatter technique for detecting suspicious objects in a densely packed large object with only single-sided access. It consists of an X-ray backscatter camera with a special twisted-slit collimator for imaging backscattering objects. The new X-ray backscatter camera images objects not only according to their densities but also by including the influence of surrounding objects. This unique feature provides new insights for identifying the internal features of the inspected object. Experimental mock-ups were designed to imitate containers with threats hidden among complex packing, as may be encountered in reality. We investigated the dependence of the X-ray backscatter image quality on (a) the exposure time, (b) multiple exposures, (c) the distance between object and slit camera, and (d) the width of the slit. Finally, the significant advantages of the presented X-ray backscatter camera in the context of aviation and port security are discussed.

  9. The Wide Field Imager instrument for Athena

    NASA Astrophysics Data System (ADS)

    Meidinger, Norbert; Barbera, Marco; Emberger, Valentin; Fürmetz, Maria; Manhart, Markus; Müller-Seidlitz, Johannes; Nandra, Kirpal; Plattner, Markus; Rau, Arne; Treberspurg, Wolfgang

    2017-08-01

    ESA's next large X-ray mission ATHENA is designed to address the Cosmic Vision science theme 'The Hot and Energetic Universe'. It will provide answers to two key astrophysical questions: how does ordinary matter assemble into the large-scale structures we see today, and how do black holes grow and shape the Universe? The ATHENA spacecraft will be equipped with two focal plane cameras, a Wide Field Imager (WFI) and an X-ray Integral Field Unit (X-IFU). The WFI instrument is optimized for state-of-the-art resolution spectroscopy over a large field of view of 40 amin x 40 amin and high count rates up to and beyond 1 Crab source intensity. The cryogenic X-IFU camera is designed for high-spectral-resolution imaging. The two cameras alternately share a mirror system based on silicon pore optics with a focal length of 12 m and a large effective area of about 2 m2 at an energy of 1 keV. Although the mission is still in phase A, i.e. studying the feasibility and developing the necessary technology, the definition and development of the instrumentation have already made significant progress. The WFI focal plane camera described here covers the energy band from 0.2 keV to 15 keV with 450 μm thick, fully depleted, back-illuminated silicon active pixel sensors of DEPFET type. Spatial resolution is provided by one million pixels, each 130 μm x 130 μm in size. The time resolution requirement is 5 ms for the WFI large detector array and 80 μs for the WFI fast detector. The large effective area of the mirror system is complemented by a high quantum efficiency, above 90% at medium and higher energies. The status of the various WFI subsystems needed to achieve this performance is described, and recent changes are explained.

  10. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods based on the kinematic model of the robot share a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. The conclusion is that the algorithm of the single-camera method needs improvement for higher accuracy, whereas the accuracy of the dual-camera method is already suitable for application.
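    Both approaches ultimately re-express a sensor-frame measurement in the global frame by applying a homogeneous transform estimated from the control points. A minimal sketch of that conversion step, with an invented rotation, translation, and measured point purely for illustration:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# a point measured in the vision sensor's own frame (hypothetical values)
p_sensor = np.array([0.10, 0.02, 0.50, 1.0])

# pose of the sensor in the global frame, as re-estimated from the control
# points (here: a 90-degree rotation about z plus a translation)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_global_sensor = to_homogeneous(Rz, np.array([1.0, 2.0, 0.3]))

# compensated measurement, expressed in the global coordinate system
p_global = T_global_sensor @ p_sensor
```

    In the dual-camera approach, T_global_sensor would be re-estimated in real time from the control points observed by the two large-field cameras.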

  11. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  12. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
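    The target-centric pipeline begins by separating moving targets from the background in each stream. A minimal running-average background-subtraction sketch (the threshold and update rate are arbitrary illustrative values; real analytics add morphological filtering, blob grouping, and per-target tracking):

```python
import numpy as np

def moving_mask(frame, background, thresh=25.0, alpha=0.05):
    """Flag pixels that differ from a running-average background model by
    more than `thresh`, then fold the frame back into the model.
    Threshold and learning rate are illustrative, not tuned values."""
    mask = np.abs(frame - background) > thresh
    background += alpha * (frame - background)   # update the model in place
    return mask

# toy 4x4 scene: one bright "target" pixel appears against a flat background
background = np.full((4, 4), 100.0)
frame = background.copy()
frame[1, 1] = 200.0
mask = moving_mask(frame, background)
```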

  13. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality control of gamma cameras and positron emission tomography (PET) cameras are commonly performed with dedicated program packages that run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, the program includes: (1) for gamma cameras, a rotation center control (following the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (following the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); (2) for PET systems, three quality controls recently defined by the French Medical Physics Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (via a Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox distributed as a free ImageJ plugin that will soon be downloadable from the Internet. In addition, the program can save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control routine.
    Finally, this toolkit is an easy and robust tool for performing quality control of gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
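    The PSF-to-MTF computation mentioned above can be sketched in one dimension: the MTF is the normalised magnitude of the Fourier transform of the sampled spread function. A minimal illustration (not the plugin's actual code):

```python
import numpy as np

def mtf_from_psf(psf_line):
    """MTF as the normalised magnitude of the Fourier transform of a
    sampled 1-D point/line spread function."""
    psf = np.array(psf_line, dtype=float)
    psf /= psf.sum()                    # normalise to unit area
    mtf = np.abs(np.fft.rfft(psf))
    return mtf / mtf[0]                 # so that MTF(0) = 1

# Gaussian PSFs: a broader PSF (worse resolution) gives a faster MTF roll-off
x = np.arange(-32, 33)
narrow = np.exp(-x**2 / (2 * 1.5**2))
wide = np.exp(-x**2 / (2 * 4.0**2))
```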

  14. LMT imaging of the Extended Groth Strip: a search for the high-redshift tail of the sub-mm galaxy population

    NASA Astrophysics Data System (ADS)

    Aretxaga, Itziar

    2015-08-01

    The combination of short- and long-wavelength deep (sub-)mm surveys can effectively be used to identify high-redshift submillimeter galaxies (z>4). With star formation rates in excess of 500 Msun/yr, these bright (sub-)mm sources have been identified as the progenitors of massive elliptical galaxies undergoing rapid growth. With this purpose in mind, we are surveying, with the 1.1mm AzTEC camera mounted on the Large Millimeter Telescope, a 20 sq. arcmin field within the Extended Groth Strip that overlaps the deep 450/850um SCUBA-2 Cosmology Legacy Survey and the CANDELS deep NIR imaging. The improved beam size of the LMT (8”) over previous surveys aids the identification of the most prominent optical/IR counterparts. We discuss the high-redshift candidates found.

  15. A review of future remote sensing satellite capabilities

    NASA Technical Reports Server (NTRS)

    Calabrese, M. A.

    1980-01-01

    Existing, planned and future NASA capabilities in the field of remote sensing satellites are reviewed in relation to the use of remote sensing techniques for the identification of irrigated lands. The status of the currently operational Landsat 2 and 3 satellites is indicated, and it is noted that Landsat D is scheduled to be in operation in two years. The orbital configuration and instrumentation of Landsat D are discussed, with particular attention given to the thematic mapper, which is expected to improve capabilities for small field identification and crop discrimination and classification. Future possibilities are then considered, including a multi-spectral resource sampler supplying high spatial and temporal resolution data possibly based on push-broom scanning, Shuttle-maintained Landsat follow-on missions, a satellite to obtain high-resolution stereoscopic data, further satellites providing all-weather radar capability and the Large Format Camera.

  16. IAE - Inflatable Antenna Experiment

    NASA Image and Video Library

    1996-05-20

    STS077-150-044 (20 May 1996) --- Following its deployment from the Space Shuttle Endeavour, the Spartan 207/Inflatable Antenna Experiment (IAE) payload is backdropped over the Grand Canyon. After the IAE completed its inflation process in free-flight, this view was photographed with a large format still camera. The activity came on the first full day of in-space operations by the six-member crew. Managed by Goddard Space Flight Center (GSFC), Spartan is designed to provide short-duration, free-flight opportunities for a variety of scientific studies. The Spartan configuration on this flight is unique in that the IAE is part of an additional separate unit which is ejected once the experiment is completed. The IAE experiment will lay the groundwork for future technology development in inflatable space structures, which will be launched and then inflated like a balloon on-orbit.

  17. IAE - Inflatable Antenna Experiment

    NASA Image and Video Library

    1996-05-20

    STS077-150-022 (20 May 1996) --- After leaving the cargo bay of the Space Shuttle Endeavour, the Spartan 207/Inflatable Antenna Experiment (IAE) payload goes through the final stages of its inflation process, backdropped over clouds and blue water. The view was photographed with a large format still camera on the first full day of in-space operations by the six-member crew. Managed by Goddard Space Flight Center (GSFC), Spartan is designed to provide short-duration, free-flight opportunities for a variety of scientific studies. The Spartan configuration on this flight is unique in that the IAE is part of an additional separate unit which is ejected once the experiment is completed. The IAE experiment will lay the groundwork for future technology development in inflatable space structures, which will be launched and then inflated like a balloon on-orbit.

  18. The Value of Change: Surprises and Insights in Stellar Evolution

    NASA Astrophysics Data System (ADS)

    Bildsten, Lars

    2018-01-01

    Astronomers with large-format cameras regularly scan the sky many times per night to detect what's changing, and telescopes in space such as Kepler and, soon, TESS obtain very accurate brightness measurements of nearly a million stars over time periods of years. These capabilities, in conjunction with theoretical and computational efforts, have yielded surprises and remarkable new insights into the internal properties of stars and how they end their lives. I will show how asteroseismology reveals the properties of the deep interiors of red giants, and highlight how astrophysical transients may be revealing unusual thermonuclear outcomes from exploding white dwarfs and the births of highly magnetic neutron stars. All the while, stellar science has been accelerated by the availability of open source tools, such as Modules for Experiments in Stellar Astrophysics (MESA), and the nearly immediate availability of observational results.

  19. Satellite land remote sensing advancements for the eighties; Proceedings of the Eighth Pecora Symposium, Sioux Falls, SD, October 4-7, 1983

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Among the topics discussed are NASA's land remote sensing plans for the 1980s, the evolution of Landsat 4 and the performance of its sensors, the Landsat 4 thematic mapper image processing system radiometric and geometric characteristics, data quality, image data radiometric analysis and spectral/stratigraphic analysis, and thematic mapper agricultural, forest resource and geological applications. Also covered are geologic applications of side-looking airborne radar, digital image processing, the large format camera, the RADARSAT program, the SPOT 1 system's program status, distribution plans, and simulation program, Space Shuttle multispectral linear array studies of the optical and biological properties of terrestrial land cover, orbital surveys of solar-stimulated luminescence, the Space Shuttle imaging radar research facility, and Space Shuttle-based polar ice sounding altimetry.

  20. Multisensor/multimission surveillance aircraft

    NASA Astrophysics Data System (ADS)

    Jobe, John T.

    1994-10-01

    The realignment of international powers and the formation of new nations have resulted in increasing worldwide concern over border security, an expanding refugee problem, protection of fishery and mineral areas, and smuggling of all types. Responsibility for protecting or defending against these threats to vital national interests is shifting from military services to other government agencies and even commercial contractors, who must apply innovative and cost-effective solutions. Previously, airborne surveillance and reconnaissance platforms have been large, mission-dedicated military aircraft. The time has arrived for a smaller, more efficient, and more effective airborne capability. This paper briefly outlines a system-of-systems approach that smaller nations can afford within their budgets while greatly expanding their surveillance capability. The characteristics of specific cameras and sensors are purposely not addressed, so that the emphasis can be placed on the integration of multiple sensors and capabilities.

  1. Viking Lander imaging investigation: Picture catalog of primary mission experiment data record

    NASA Technical Reports Server (NTRS)

    Tucker, R. B.

    1978-01-01

    All the images returned by the two Viking Landers during the primary phase of the Viking Mission are presented. Listings of supplemental information which described the conditions under which the images were acquired are included together with skyline drawings which show where the images are positioned in the field of view of the cameras. Subsets of the images are listed in a variety of sequences to aid in locating images of interest. The format and organization of the digital magnetic tape storage of the images are described. The mission and the camera system are briefly described.

  2. Very-large-area CCD image sensors: concept and cost-effective research

    NASA Astrophysics Data System (ADS)

    Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.

    2009-01-01

    A new-generation full-frame 36x48 mm2 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications has been developed by means of the so-called building-block concept. The 48Mp devices are formed by stitching 1kx1k building blocks with 6.0 µm pixel pitch in a 6x8 (hxv) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is deliberately small so that data can be obtained from many devices: evaluation of basic parameters such as the image pixel and the on-chip amplifier provides statistical data from a limited number of wafers, whereas the large-area devices are evaluated for aspects typical of large-sensor operation and performance, such as charge transport efficiency. Combined with the use of multi-layer reticles, this makes the sensor development cost-effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS). Hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets the demands of wide-angle photography.
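    The quoted dynamic range follows directly from the full-well capacity and read noise, since dynamic range in decibels is 20·log10(full well / read noise):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB from full-well capacity and read noise,
    both expressed in electrons."""
    return 20 * math.log10(full_well_e / read_noise_e)

dr = dynamic_range_db(58_000, 12)   # the figures quoted in the abstract
```

    This gives about 73.7 dB, in line with the 73 dB quoted above.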

  3. Review: comparison of PET rubidium-82 with conventional SPECT myocardial perfusion imaging

    PubMed Central

    Ghotbi, Adam A; Kjær, Andreas; Hasbak, Philip

    2014-01-01

    Nuclear cardiology has for many years been focused on gamma camera technology. With ever improving cameras and software applications, this modality has developed into an important assessment tool for ischaemic heart disease. However, the development of new perfusion tracers has been scarce. While cardiac positron emission tomography (PET) so far largely has been limited to centres with on-site cyclotron, recent developments with generator produced perfusion tracers such as rubidium-82, as well as an increasing number of PET scanners installed, may enable a larger patient flow that may supersede that of gamma camera myocardial perfusion imaging. PMID:24028171

  4. The Large Synoptic Survey Telescope (LSST) Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Ranked as the top ground-based national priority for the field for the current decade, LSST is currently under construction in Chile. The U.S. Department of Energy’s SLAC National Accelerator Laboratory is leading the construction of the LSST camera – the largest digital camera ever built for astronomy. SLAC Professor Steven M. Kahn is the overall Director of the LSST project, and SLAC personnel are also participating in the data management. The National Science Foundation is the lead agency for construction of the LSST. Additional financial support comes from the Department of Energy and private funding raised by the LSST Corporation.

  5. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~ 845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of sediment of dunes are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. 
It was also beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes the THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented as an ArcMap project. The ArcMap project allows fuller use of the data, but requires ESRI ArcMap® software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file (NP_Documentation folder). Documentation files are available as PDF and ASCII (.txt) files. Tables are available in both Excel and ASCII (.txt) formats.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conder, A.; Mummolo, F. J.

    The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.

  7. 17. Machines in middle 1904 section of Building 59, Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. Machines in middle 1904 section of Building 59, Camera is pointed SW. Large wheel is part of Post Lathe, or Bull Lathe, manufactured by Oliver Machinery Co. Machine at left is a smaller Lathe made by Yates American Machinery Co., also seen in photo WA-116-A-20. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  8. Clinical trials of the prototype Rutherford Appleton Laboratory MWPC positron camera at the Royal Marsden Hospital

    NASA Astrophysics Data System (ADS)

    Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.

    1988-06-01

    Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using 18F, imaging of brain glucose metabolism using 18F FDG, bone marrow imaging using 52Fe citrate and thyroid imaging with Na 124I. Longitudinal tomograms were produced from the limited-angle data acquisition from the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of the functioning thyroid tissue was obtained from the 3D PET image and this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera, and the use of smaller than usual amounts of activity administered, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar view using a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.

  9. Data indicating temperature response of Ti-6Al-4V thin-walled structure during its additive manufacture via Laser Engineered Net Shaping.

    PubMed

    Marshall, Garrett J; Thompson, Scott M; Shamsaei, Nima

    2016-06-01

    An OPTOMEC Laser Engineered Net Shaping (LENS™) 750 system was retrofitted with a melt pool pyrometer and in-chamber infrared (IR) camera for nondestructive thermal inspection of the blown-powder, direct laser deposition (DLD) process. Data indicative of temperature and heat transfer within the melt pool and heat affected zone atop a thin-walled structure of Ti-6Al-4V during its additive manufacture are provided. Melt pool temperature data were collected via the dual-wavelength pyrometer while the dynamic, bulk part temperature distribution was collected using the IR camera. Such data are provided in Comma Separated Values (CSV) file format, containing a 752×480 matrix and a 320×240 matrix of temperatures corresponding to individual pixels of the pyrometer and IR camera, respectively. The IR camera and pyrometer temperature data are provided in blackbody-calibrated, raw forms. Provided thermal data can aid in generating and refining process-property-performance relationships between laser manufacturing and its fabricated materials.

  10. Data indicating temperature response of Ti–6Al–4V thin-walled structure during its additive manufacture via Laser Engineered Net Shaping

    PubMed Central

    Marshall, Garrett J.; Thompson, Scott M.; Shamsaei, Nima

    2016-01-01

    An OPTOMEC Laser Engineered Net Shaping (LENS™) 750 system was retrofitted with a melt pool pyrometer and in-chamber infrared (IR) camera for nondestructive thermal inspection of the blown-powder, direct laser deposition (DLD) process. Data indicative of temperature and heat transfer within the melt pool and heat affected zone atop a thin-walled structure of Ti–6Al–4V during its additive manufacture are provided. Melt pool temperature data were collected via the dual-wavelength pyrometer while the dynamic, bulk part temperature distribution was collected using the IR camera. Such data are provided in Comma Separated Values (CSV) file format, containing a 752×480 matrix and a 320×240 matrix of temperatures corresponding to individual pixels of the pyrometer and IR camera, respectively. The IR camera and pyrometer temperature data are provided in blackbody-calibrated, raw forms. Provided thermal data can aid in generating and refining process-property-performance relationships between laser manufacturing and its fabricated materials. PMID:27054180
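    Since each frame in this dataset is a plain comma-separated matrix of per-pixel temperatures, it can be loaded directly into an array. A minimal sketch; the tiny in-memory frame below stands in for a real 752×480 pyrometer export, and its values are invented for illustration:

```python
import io
import numpy as np

def load_frame(csv_source):
    """Parse one thermal frame stored as a comma-separated matrix of
    per-pixel temperatures (752x480 for the pyrometer, 320x240 for the
    IR camera in this dataset) into a 2-D array."""
    return np.loadtxt(csv_source, delimiter=",")

# tiny synthetic 2x2 frame standing in for a real pyrometer CSV export
demo = io.StringIO("300.0,305.2\n1650.4,310.9\n")
frame = load_frame(demo)
peak = frame.max()   # peak temperature in this frame (melt-pool hot spot)
```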

  11. Commissioning of a new SeHCAT detector and comparison with an uncollimated gamma camera.

    PubMed

    Taylor, Jonathan C; Hillel, Philip G; Himsworth, John M

    2014-10-01

    Measurements of SeHCAT (tauroselcholic [75selenium] acid) retention have been used to diagnose bile acid malabsorption for a number of years. In current UK practice the vast majority of centres calculate uptake using an uncollimated gamma camera. Because of ever-increasing demands on gamma camera time, a new 'probe' detector was designed, assembled and commissioned. To validate the system, nine patients were scanned at day 0 and day 7 with both the new probe detector and an uncollimated gamma camera. Commissioning results were largely in line with expectations. Spatial resolution (full-width 95% of maximum) at 1 m was 36.6 cm, the background count rate was 24.7 cps and sensitivity at 1 m was 720.8 cps/MBq. The patient comparison study showed a mean absolute difference in retention measurements of 0.8% between the probe and uncollimated gamma camera, and SD of ± 1.8%. The study demonstrated that it is possible to create a simple, reproducible SeHCAT measurement system using a commercially available scintillation detector. Retention results from the probe closely agreed with those from the uncollimated gamma camera.
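    The retention figure itself is conventionally computed as the day-7 to day-0 net count-rate ratio corrected for the physical decay of Se-75 between the two scans. The abstract does not spell this formula out; the sketch below is the standard calculation, using an approximate half-life value:

```python
import math

SE75_HALF_LIFE_D = 119.8   # approximate Se-75 physical half-life, in days

def sehcat_retention(net_day0_cps, net_day7_cps, days=7.0):
    """Percent SeHCAT retention: day-7 to day-0 net count-rate ratio,
    corrected for Se-75 physical decay between the two scans. Background
    (e.g. the 24.7 cps quoted above) must already be subtracted."""
    decay = math.exp(-math.log(2.0) * days / SE75_HALF_LIFE_D)
    return 100.0 * net_day7_cps / (net_day0_cps * decay)
```

    With identical net count rates on both days, the decay correction alone raises the result to roughly 104%, which is why uncorrected ratios would systematically underestimate retention.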

  12. Mechanical design for the Evryscope: a minute cadence, 10,000-square-degree FoV, gigapixel-scale telescope

    NASA Astrophysics Data System (ADS)

    Ratzloff, Jeff; Law, Nicholas M.; Fors, Octavi; Wulfken, Philip J.

    2015-01-01

    We designed, tested, prototyped and built a compact 27-camera robotic telescope that images 10,000 square degrees in 2-minute exposures. We exploit mass-produced interline CCD cameras with Rokinon consumer lenses to economically build a telescope that covers this large part of the sky simultaneously, with pixel sampling good enough to avoid the confusion limit over most of the sky. We developed the initial concept into a 3D mechanical design with the aid of computer modeling programs. Significant design components include the camera assembly-mounting modules, the hemispherical support structure, and the instrument base structure. We simulated flexure and material stress in each of the three main components, which helped us optimize the rigidity and materials selection while reducing weight. The camera mounts are CNC aluminum and the support shell is reinforced fiberglass. Other significant project components include optimizing camera locations, camera alignment, thermal analysis, environmental sealing, wind protection, and ease of access to internal components. The Evryscope will be assembled at UNC Chapel Hill and deployed to CTIO in 2015.
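
    As a rough plausibility check on the quoted 10,000-square-degree coverage, the pinhole model gives each camera's field of view from its sensor size and focal length. The 36x24 mm sensor and 85 mm focal length below are hypothetical values for illustration, not the Evryscope's actual optics, and the total ignores the overlap between adjacent cameras:

```python
import math

def fov_deg(sensor_mm, focal_mm):
    """Angular field of view along one sensor axis, pinhole camera model."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Hypothetical optics: a 36 x 24 mm sensor behind an 85 mm lens.
w = fov_deg(36.0, 85.0)
h = fov_deg(24.0, 85.0)
# Per-camera FoV and a naive (overlap-free) 27-camera total in square degrees:
print(round(w, 1), round(h, 1), round(27 * w * h))
```

    With these assumed numbers the naive total lands just above 10,000 square degrees, consistent with cameras of this class tiling such a field with modest overlap.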

  13. Completely optical orientation determination for an unstabilized aerial three-line camera

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2010-10-01

    Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision requires considerable effort unless extensive camera stabilization is used. But stabilization in turn implies high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be reliably determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested in a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, measurements from a high-end navigation system and ground control points are used.

  14. Space Science

    NASA Image and Video Library

    2002-04-07

    The Advanced Camera for Surveys (ACS), the newest camera on the Hubble Space Telescope, has captured a spectacular pair of galaxies. Located 300 million light-years away in the constellation Coma Berenices, the colliding galaxies have been nicknamed "The Mice" because of the long tails of stars and gas emanating from each galaxy. Otherwise known as NGC 4676, the pair will eventually merge into a single giant galaxy. In the galaxy at left, the bright blue patch is resolved into a vigorous cascade of clusters and associations of young, hot blue stars, whose formation has been triggered by the tidal forces of the gravitational interaction. The clumps of young stars in the long, straight tidal tail (upper right) are separated by fainter regions of material. These dim regions suggest that the clumps of stars have formed from the gravitational collapse of the gas and dust that once occupied those areas. Some of the clumps have luminous masses comparable to dwarf galaxies that orbit the halo of our own Milky Way Galaxy. Computer simulations by astronomers show that we are seeing two nearly identical spiral galaxies approximately 160 million years after their closest encounter. The simulations also show that the pair will eventually merge, forming a large, nearly spherical galaxy (known as an elliptical galaxy). The Mice presage what may happen to our own Milky Way several billion years from now when it collides with our nearest large neighbor, the Andromeda Galaxy (M31). This picture is assembled from three sets of images taken on April 7, 2002, in blue, orange, and near-infrared filters. Credit: NASA, H. Ford (JHU), G. Illingworth (UCSC/LO), M. Clampin (STScI), G. Hartig (STScI), the ACS Science Team, and ESA.

  15. Hubble Space Telescope Image of NGC 4676, 'The Mice'

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Advanced Camera for Surveys (ACS), the newest camera on the Hubble Space Telescope, has captured a spectacular pair of galaxies. Located 300 million light-years away in the constellation Coma Berenices, the colliding galaxies have been nicknamed 'The Mice' because of the long tails of stars and gas emanating from each galaxy. Otherwise known as NGC 4676, the pair will eventually merge into a single giant galaxy. In the galaxy at left, the bright blue patch is resolved into a vigorous cascade of clusters and associations of young, hot blue stars, whose formation has been triggered by the tidal forces of the gravitational interaction. The clumps of young stars in the long, straight tidal tail (upper right) are separated by fainter regions of material. These dim regions suggest that the clumps of stars have formed from the gravitational collapse of the gas and dust that once occupied those areas. Some of the clumps have luminous masses comparable to dwarf galaxies that orbit the halo of our own Milky Way Galaxy. Computer simulations by astronomers show that we are seeing two nearly identical spiral galaxies approximately 160 million years after their closest encounter. The simulations also show that the pair will eventually merge, forming a large, nearly spherical galaxy (known as an elliptical galaxy). The Mice presage what may happen to our own Milky Way several billion years from now when it collides with our nearest large neighbor, the Andromeda Galaxy (M31). This picture is assembled from three sets of images taken on April 7, 2002, in blue, orange, and near-infrared filters. Credit: NASA, H. Ford (JHU), G. Illingworth (UCSC/LO), M. Clampin (STScI), G. Hartig (STScI), the ACS Science Team, and ESA.

  16. The AstraLux Large M-dwarf Multiplicity Survey

    NASA Astrophysics Data System (ADS)

    Janson, Markus; Hormuth, Felix; Bergfors, Carolina; Brandner, Wolfgang; Hippler, Stefan; Daemgen, Sebastian; Kudryavtseva, Natalia; Schmalzl, Eva; Schnupp, Carolin; Henning, Thomas

    2012-07-01

    We present the results of an extensive high-resolution imaging survey of M-dwarf multiplicity using the Lucky Imaging technique. The survey made use of the AstraLux Norte camera at the Calar Alto 2.2 m telescope and the AstraLux Sur camera at the ESO New Technology Telescope in order to cover nearly the full sky. In total, 761 stars were observed (701 M-type and 60 late K-type), among which 182 new and 37 previously known companions were detected in 205 systems. Most of the targets have been observed during two or more epochs, and could be confirmed as physical companions through common proper motion, often with orbital motion being confirmed in addition. After accounting for various bias effects, we find a total M-dwarf multiplicity fraction of 27% ± 3% within the AstraLux detection range of 0.08''-6'' (semimajor axes of ~3-227 AU at a median distance of 30 pc). We examine various statistical multiplicity properties within the sample, such as the trend of multiplicity fraction with stellar mass and the semimajor axis distribution. The results indicate that M-dwarfs are largely consistent with constituting an intermediate step in a continuous distribution from higher-mass stars down to brown dwarfs. Along with other observational results in the literature, this provides further indications that stars and brown dwarfs may share a common formation mechanism, rather than being distinct populations. Based on observations collected at the European Southern Observatory, Chile, under observing programs 081.C-0314(A), 082.C-0053(A), and 084.C-0812(A), and on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institute for Astronomy and the Instituto de Astrofísica de Andalucía (CSIC).

  17. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the concept and led to several commercially available ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
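
    The reported growth of depth error with range is consistent with a first-order stereo-style model, in which a plenoptic camera behaves like a very short effective-baseline stereo rig. A sketch under that assumption; the baseline, focal length and disparity precision below are hypothetical, chosen only so the numbers land near the abstract's ~3% at 100 m:

```python
def depth_error(z_m, baseline_m, focal_px, disparity_sigma_px):
    """First-order stereo depth uncertainty: grows with the square of the range."""
    return (z_m ** 2) * disparity_sigma_px / (baseline_m * focal_px)

# Hypothetical effective baseline and focal length for a focused plenoptic camera:
for z in (30, 60, 100):
    err = depth_error(z, baseline_m=0.05, focal_px=6000.0, disparity_sigma_px=0.1)
    print(z, "m:", round(err, 2), "m error,", f"{100 * err / z:.1f}%")
```

    The quadratic growth (1% at 30 m, ~3% at 100 m in this toy parameterization) is why single-point outliers become severe at long range and why spatial or temporal filtering is needed.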

  18. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed in any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which cost about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared against check points measured with a total station and against laser scanning point clouds. The paper summarizes some practical rules for image acquisition, as well as the importance of ground control points in removing possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
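
    Photogrammetric processing of such imagery starts from the equirectangular projection that consumer 360° cameras typically deliver, which maps each pixel to a viewing direction on the sphere. A minimal sketch of that mapping; the axis conventions (y up, +z forward) are a choice for illustration, not taken from the paper:

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction.
    Longitude spans [-pi, pi) across the width; latitude runs from +pi/2
    at the top row to -pi/2 at the bottom row."""
    lon = (u / width) * 2 * math.pi - math.pi
    lat = math.pi / 2 - (v / height) * math.pi
    return (math.cos(lat) * math.sin(lon),   # x: right
            math.sin(lat),                   # y: up
            math.cos(lat) * math.cos(lon))   # z: forward

# The image centre looks straight down the +z (forward) axis:
print(equirect_to_ray(1000, 500, 2000, 1000))  # -> (0.0, 0.0, 1.0)
```

    Bundle adjustment of 360° images then treats these rays, rather than central-perspective image coordinates, as the observations.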

  19. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    PubMed

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video record the posterior view of the corneoscleral button during the big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software. The formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially in the behavior of the big bubble formation in DALK.

  20. NUSC Technical Publications Guide.

    DTIC Science & Technology

    1985-05-01

    Facility personnel especially that of A. Castelluzzo, E. Deland, J. Gesel, and E. Szlosek (all of Code 4343). Reviewed and Approved: 14 July 1980 D...their technical content and format. Review and approve the manual outline, the review manuscript, and the final camera-reproducible copy. Conduct in

  1. Studies on the formation, temporal evolution and forensic applications of camera "fingerprints".

    PubMed

    Kuppuswamy, R

    2006-06-02

    A series of experiments was conducted by exposing negative film in brand-new cameras of different makes and models. The exposures were repeated at regular time intervals spread over a period of 2 years. The processed film negatives were studied under a stereomicroscope (10-40x) in transmitted illumination for the presence of characterizing features on their four frame edges. These features were then related to those present on the masking frame of the cameras by examining the latter under reflected-light stereomicroscopy (10-40x). The purpose of the study was to determine the origin and permanence of the frame-edge marks, and also the processes by which the marks may alter with time. The investigations arrived at the following conclusions: (i) the edge marks originated principally from imperfections received on the film mask during manufacturing, and occasionally from dirt, dust and fiber accumulated on the film mask over an extended time period. (ii) The edge profiles of the cameras remained fixed over a considerable period of time, making them a valuable identification medium. (iii) The marks were found to vary in nature even among cameras manufactured at a similar time. (iv) The f/number and object distance have a great effect on the recording of the frame-edge marks during exposure of the film. The above findings serve as a useful addition to the technique of camera edge-mark comparisons.

  2. Innovative R.E.A. tools for integrated bathymetric survey

    NASA Astrophysics Data System (ADS)

    Demarte, Maurizio; Ivaldi, Roberta; Sinapi, Luigi; Bruzzone, Gabriele; Caccia, Massimo; Odetti, Angelo; Fontanelli, Giacomo; Masini, Andrea; Simeone, Emilio

    2017-04-01

    The REA (Rapid Environmental Assessment) concept is a methodology for acquiring environmental information, processing it, and returning it in standard paper-chart or standard digital format. The acquired data thus become available for ingestion and exploitation by the Civilian Protection Emergency Organization or the Rapid Response Forces. The use of Remotely Piloted Aircraft Systems (RPAS) carrying miniaturized multispectral or hyperspectral cameras gives the operator the capability to react in a short time, together with the capacity to collect a large amount of diverse data and to deliver a very large number of products. The proposed methodology incorporates data collected by remote and autonomous sensors that cover areas in a cost-effective manner. The hyperspectral sensors are able to map seafloor morphology, seabed structure and the depth of the bottom surface, and to estimate sediment development. The relevant spectral portions are selected using an appropriate configuration of hyperspectral cameras to maximize the spectral resolution. Data acquired by the hyperspectral camera are geo-referenced synchronously with an Attitude and Heading Reference System (AHRS) sensor. The data can undergo a first on-board processing step on the unmanned vehicle before being transferred through the Ground Control Station (GCS) to a Processing Exploitation Dissemination (PED) system. The recent introduction of Data Distribution System (DDS) capabilities in PED allows a cooperative, distributed approach to modern decision making. Two platforms are used in our project, a Remotely Piloted Aircraft System (RPAS) and an Unmanned Surface Vehicle (USV). The two platforms interact to cover a surveyed area wider than either vehicle could cover alone.
    The USV, especially designed to work in very shallow water, has a modular structure and an open hardware and software architecture that allow easy installation and integration of the various sensors useful for seabed analysis. The very stable platform on top of the USV allows take-off and landing of the RPAS. Exploiting its greater power autonomy and load capability, the USV serves as a mothership for the RPAS. In particular, during missions the USV can recharge the RPAS and act as a communication bridge between the RPAS and its control station. The main advantage of the system is the remote acquisition of high-resolution bathymetric data from the RPAS in areas where the possibilities for a systematic, traditional survey are few or none. These tools (a USV carrying an RPAS with a hyperspectral camera) constitute an innovative and powerful system that gives an Emergency Response Unit the right instruments to react quickly. The development of this system could resolve the classic conflict between resolution, needed to capture fine-scale variability, and coverage, needed for large environmental phenomena with very high variability over a wide range of spatial and temporal scales, as in the coastal environment.

  3. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest from many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
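
    The 70 Hz, 50% duty-cycle flashing enables lock-in-style demodulation at the 4 kHz sample rate: correlating the photodiode samples against sine/cosine references at the modulation frequency rejects DC and slowly varying light such as sunlight. A simplified single-pixel simulation of that idea; the signal levels and the particular demodulation scheme are illustrative assumptions, not the authors' implementation:

```python
import math

RATE = 4000.0   # photodiode array frame rate, Hz
F_LED = 70.0    # LED modulation frequency, Hz

def led_square(t):
    """70 Hz, 50% duty-cycle LED modulation (1 = on, 0 = off)."""
    return 1.0 if (t * F_LED) % 1.0 < 0.5 else 0.0

def lockin_amplitude(samples, n):
    """Correlate against sin/cos references at F_LED; DC background cancels out."""
    i = sum(s * math.sin(2 * math.pi * F_LED * k / RATE) for k, s in enumerate(samples))
    q = sum(s * math.cos(2 * math.pi * F_LED * k / RATE) for k, s in enumerate(samples))
    return 2.0 * math.hypot(i, q) / n

n = 4000  # one second of samples
# Unit-amplitude modulated LED buried in a 5x larger constant background (e.g. sunlight):
samples = [5.0 + led_square(k / RATE) for k in range(n)]
print(round(lockin_amplitude(samples, n), 2))  # close to 2/pi, the square wave's fundamental
```

    Applying the same demodulation per pixel of the linear array yields the LED's position along the array while suppressing unmodulated infrared sources.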

  4. The Grism Lens-Amplified Survey from Space (GLASS). V. Extent and Spatial Distribution of Star Formation in z ~ 0.5 Cluster Galaxies

    NASA Astrophysics Data System (ADS)

    Vulcani, Benedetta; Treu, Tommaso; Schmidt, Kasper B.; Poggianti, Bianca M.; Dressler, Alan; Fontana, Adriano; Bradač, Marusa; Brammer, Gabriel B.; Hoag, Austin; Huang, Kuan-Han; Malkan, Matthew; Pentericci, Laura; Trenti, Michele; von der Linden, Anja; Abramson, Louis; He, Julie; Morris, Glenn

    2015-12-01

    We present the first study of the spatial distribution of star formation in z ˜ 0.5 cluster galaxies. The analysis is based on data taken with the Wide Field Camera 3 as part of the Grism Lens-Amplified Survey from Space (GLASS). We illustrate the methodology by focusing on two clusters (MACS 0717.5+3745 and MACS 1423.8+2404) with different morphologies (one relaxed and one merging) and use foreground and background galaxies as a field control sample. The cluster+field sample consists of 42 galaxies with stellar masses in the range 108-1011 M⊙ and star formation rates in the range 1-20 M⊙ yr-1. Both in clusters and in the field, Hα is more extended than the rest-frame UV continuum in 60% of the cases, consistent with diffuse star formation and inside-out growth. In ˜20% of the cases, the Hα emission appears more extended in cluster galaxies than in the field, pointing perhaps to ionized gas being stripped and/or star formation being enhanced at large radii. The peak of the Hα emission and that of the continuum are offset by less than 1 kpc. We investigate trends with the hot gas density as traced by the X-ray emission, and with the surface mass density as inferred from gravitational lens models, and find no conclusive results. The diversity of morphologies and sizes observed in Hα illustrates the complexity of the environmental processes that regulate star formation. Upcoming analysis of the full GLASS data set will increase our sample size by almost an order of magnitude, verifying and strengthening the inference from this initial data set.

  5. In-vessel visible inspection system on KSTAR

    NASA Astrophysics Data System (ADS)

    Chung, Jinil; Seo, D. C.

    2008-08-01

    To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean superconducting tokamak advanced research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with images of the KSTAR Ohmic discharges during the first plasma campaign.

  6. AFRC2016-0116-065

    NASA Image and Video Library

    2016-04-15

    The newest instrument, an infrared camera called the High-resolution Airborne Wideband Camera-Plus (HAWC+), was installed on the Stratospheric Observatory for Infrared Astronomy, SOFIA, in April of 2016. This is the only currently operating astronomical camera that makes images using far-infrared light, allowing studies of low-temperature early stages of star and planet formation. HAWC+ includes a polarimeter, a device that measures the alignment of incoming light waves. With the polarimeter, HAWC+ can map magnetic fields in star forming regions and in the environment around the supermassive black hole at the center of the Milky Way galaxy. These new maps can reveal how the strength and direction of magnetic fields affect the rate at which interstellar clouds condense to form new stars. A team led by C. Darren Dowell at NASA’s Jet Propulsion Laboratory and including participants from more than a dozen institutions developed the instrument.

  7. Recording the synchrotron radiation by a picosecond streak camera for bunch diagnostics in cyclic accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vereshchagin, A K; Vorob'ev, N S; Gornostaev, P B

    2016-02-28

    A PS-1/S1 picosecond streak camera with a linear sweep is used to measure temporal characteristics of synchrotron radiation pulses on a damping ring (DR) at the Budker Institute of Nuclear Physics (BINP) of the Siberian Branch of the Russian Academy of Sciences (Novosibirsk). The data obtained allow a conclusion as to the formation processes of electron bunches and their 'quality' in the DR after injection from the linear accelerator. The expediency of employing the streak camera as part of an optical diagnostic accelerator complex for adjusting the injection from a linear accelerator is shown. Discussed is the issue of designing a new-generation dissector with a time resolution up to a few picoseconds, which would allow implementation of continuous bunch monitoring in the DR during mutual work with the electron-positron colliders at the BINP.

  8. ON THE ORIGIN OF THE SUPERGIANT H I SHELL AND PUTATIVE COMPANION IN NGC 6822

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, John M.; O'Leary, Erin M.; Weisz, Daniel R.

    2012-03-10

    We present new Hubble Space Telescope Advanced Camera for Surveys imaging of six positions spanning 5.8 kpc of the H I major axis of the Local Group dIrr NGC 6822, including both the putative companion galaxy and the large H I hole. The resulting deep color-magnitude diagrams show that NGC 6822 has formed >50% of its stars in the last ~5 Gyr. The star formation histories of all six positions are similar over the most recent 500 Myr, including low-level star formation throughout this interval and a weak increase in star formation rate during the most recent 50 Myr. Stellar feedback can create the giant H I hole, assuming that the lifetime of the structure is longer than 500 Myr; such long-lived structures have now been observed in multiple systems and may be the norm in galaxies with solid-body rotation. The old stellar populations (red giants and red clump stars) of the putative companion are consistent with those of the extended halo of NGC 6822; this argues against the interpretation of this structure as a bona fide interacting companion galaxy and against its being linked to the formation of the H I hole via an interaction. Since there is no evidence in the stellar population of a companion galaxy, the most likely explanation of the extended H I structure in NGC 6822 is a warped disk inclined to the line of sight.

  9. Nocturnal low-level jet and low-level cloud occurrence over Southern West Africa during DACCIWA campaign

    NASA Astrophysics Data System (ADS)

    Dione, Cheikh; Lohou, Fabienne; Lothon, Marie; Kaltoff, Norbert; Adler, Bianca; Babić, Karmen; Pedruzo-Bagazgoitia, Xabier

    2017-04-01

    During the summer monsoon period in West Africa, a nocturnal low-level jet (NLLJ) is frequently observed and is associated with the formation of a low-level deck of stratus or stratocumulus clouds over the southern part of the region. Understanding the mechanisms controlling the diurnal cycle of these low-level clouds (LLC) is one of the goals of the DACCIWA (Dynamics-aerosol-chemistry-cloud interactions in West Africa) project. During the ground campaign, which took place in June-July 2016, numerous instruments devoted to documenting atmospheric boundary-layer dynamics and thermodynamics, clouds, aerosols and precipitation were deployed at the Kumasi (Ghana), Savè (Benin) and Ile-Ife (Nigeria) supersites. Several parameters can influence LLC formation: large-scale conditions, but also local factors such as stability, the interaction between the monsoon and Harmattan flows, and turbulence. Previous studies have pointed out that the NLLJ plays a key role in LLC formation. Therefore, based on 49 nights of observations, our study focuses on the possible link between the NLLJ and the formation, evolution and dissipation of the LLC over Savè. The characteristics of the LLCs (onset, evolution and dissipation times, base height and thickness) are investigated using data from a ceilometer, an infrared cloud camera, and frequent and normal radiosoundings. UHF wind profiler data are used to estimate the occurrence of the NLLJ as well as the depth of the monsoon flow.

  10. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functions of consumer-grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is keenly anticipated in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been developed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, there are still problems for efficient digital close-range photogrammetry. These problems are the reconfirmation of target location algorithms with subpixel accuracy for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motivation, an empirical test of several algorithms for target location with subpixel accuracy, together with an indicator for estimating the accuracy, is presented in this paper using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
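
    While the abstract identifies least-squares ellipse fitting as the most accurate algorithm, even a plain intensity-weighted centroid reaches subpixel accuracy on a clean circular target, which makes it a common baseline in such comparisons. A minimal sketch; the cone-shaped synthetic target below is hypothetical test data, not the paper's imagery:

```python
def weighted_centroid(image):
    """Intensity-weighted centroid of a grayscale patch -> subpixel (x, y)."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

def synthetic_target(cx, cy, size=7, r=2.0):
    """Cone-shaped circular blob with its true centre at (cx, cy)."""
    return [[max(0.0, 1.0 - ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 / r)
             for x in range(size)] for y in range(size)]

x, y = weighted_centroid(synthetic_target(3.3, 2.8))
print(round(x, 2), round(y, 2))  # recovers the centre to a fraction of a pixel
```

    Ellipse fitting improves on this mainly for defocused or obliquely viewed targets, where the centroid becomes biased by the asymmetric intensity profile.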

  11. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters; that is, the position of the cameras relative to each other (i.e., separation distance, camera angles, etc.) must be determined. Typically, the cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially when it is applied to monitor large structures from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
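    The idea in this record — orientation from an IMU plus separation from a radar ranging sensor — amounts to recovering the stereo baseline without a rigid bar. A hedged sketch of the geometric step (axis conventions and the function name are assumptions; the abstract does not specify the actual sensor-fusion details):

    ```python
    import math

    def baseline_from_imu_radar(range_m, yaw_deg, pitch_deg):
        # Spherical-to-Cartesian conversion: the radar supplies the separation
        # distance, the IMU the direction of camera B as seen from camera A.
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        x = range_m * math.cos(pitch) * math.cos(yaw)
        y = range_m * math.cos(pitch) * math.sin(yaw)
        z = range_m * math.sin(pitch)
        return (x, y, z)

    # Two cameras 2 m apart, level, aligned with the x-axis:
    tx, ty, tz = baseline_from_imu_radar(2.0, 0.0, 0.0)
    ```

    The resulting translation vector (together with the IMU-derived relative rotation) is exactly what a stereo calibration needs in place of a fixed, pre-measured bar.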

  12. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes, as well as electronic fiberscopes, are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of a CCD camera system and an electronic fiberscopy system are at least US$10,000 and US$30,000, respectively. Recently, the performance of audio-visual instruments has improved dramatically, with a concomitant reduction in cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US$1,000. The system is therefore both cost-effective and useful for the outpatient clinic, the casualty setting, or house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  13. A high resolution IR/visible imaging system for the W7-X limiter

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.

    2016-11-01

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large-diameter (184 mm clear aperture) uncoated sapphire window, couples a direct-viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft-iron shielding. We have taken IR data at frame rates from 125 Hz to 1.25 kHz and observed surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow-band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.

  14. Wavefront Sensing With Switched Lenses for Defocus Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    In an alternative hardware design for an apparatus used in image-based wavefront sensing, defocus diversity is introduced by means of fixed lenses that are mounted in a filter wheel (see figure) so that they can be alternately switched into a position in front of the focal plane of an electronic camera recording the image formed by the optical system under test. [The terms image-based, wavefront sensing, and defocus diversity are defined in the first of the three immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] Each lens in the filter wheel is designed so that the optical effect of placing it at the assigned position is equivalent to the optical effect of translating the camera a specified defocus distance along the optical axis. Heretofore, defocus diversity has been obtained by translating the imaging camera along the optical axis to various defocus positions. Because data must be taken at multiple, accurately measured defocus positions, it is necessary to mount the camera on a precise translation stage that must be calibrated for each defocus position and/or to use an optical encoder for measurement and feedback control of the defocus positions. Additional latency is introduced into the wavefront sensing process as the camera is translated to the various defocus positions. Moreover, if the optical system under test has a large focal length, the required defocus values are large, making it necessary to use a correspondingly bulky translation stage. By eliminating the need for translation of the camera, the alternative design simplifies and accelerates the wavefront-sensing process. This design is cost-effective in that the filterwheel/lens mechanism can be built from commercial catalog components. After initial calibration of the defocus value of each lens, a selected defocus value is introduced by simply rotating the filter wheel to place the corresponding lens in front of the camera. 
The rotation of the wheel can be automated by use of a motor drive, and further calibration is not necessary. Because a camera-translation stage is no longer needed, the size of the overall apparatus can be correspondingly reduced.
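    Each filter-wheel lens is calibrated once to an equivalent defocus distance. The standard paraxial relation linking an axial translation δz to peak defocus aberration, W020 = δz / (8N²), expressed in waves, illustrates why a fixed lens can stand in for a camera translation (a textbook relation used here for illustration; the article itself gives no formula, and the values below are assumptions):

    ```python
    def defocus_waves(delta_z_mm, f_number, wavelength_um=0.633):
        # W020 = dz / (8 N^2), converted to units of the working wavelength.
        wavelength_mm = wavelength_um * 1e-3
        return delta_z_mm / (8.0 * f_number ** 2 * wavelength_mm)

    # A 1 mm camera translation on an f/10 system at 633 nm:
    w = defocus_waves(1.0, 10.0)  # about 2 waves of defocus
    ```

    The quadratic dependence on f-number also explains the article's point about long-focal-length systems: large N means each wave of diversity requires a proportionally larger (and bulkier) translation, which the filter-wheel lenses avoid.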

  15. A high resolution IR/visible imaging system for the W7-X limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G. A., E-mail: wurden@lanl.gov; Dunn, J. P.; Stephey, L. A.

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (∼1–4.5 MW/m²), during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.

  16. Centralized automated quality assurance for large scale health care systems. A pilot method for some aspects of dental radiography.

    PubMed

    Benn, D K; Minden, N J; Pettigrew, J C; Shim, M

    1994-08-01

    President Clinton's Health Security Act proposes the formation of large scale health plans with improved quality assurance. Dental radiography consumes 4% ($1.2 billion in 1990) of total dental expenditure yet regular systematic office quality assurance is not performed. A pilot automated method is described for assessing density of exposed film and fogging of unexposed processed film. A workstation and camera were used to input intraoral radiographs. Test images were produced from a phantom jaw with increasing exposure times. Two radiologists subjectively classified the images as too light, acceptable, or too dark. A computer program automatically classified global grey level histograms from the test images as too light, acceptable, or too dark. The program correctly classified 95% of 88 clinical films. Optical density of unexposed film in the range 0.15 to 0.52 measured by computer was reliable to better than 0.01. Further work is needed to see if comprehensive centralized automated radiographic quality assurance systems with feedback to dentists are feasible, are able to improve quality, and are significantly cheaper than conventional clerical methods.
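    The classification step described in this record — mapping a film's global grey-level statistics to too light / acceptable / too dark — can be sketched as follows. The cut-offs and the use of the histogram mean are illustrative assumptions; the abstract does not state the paper's actual decision rule:

    ```python
    import numpy as np

    def classify_density(image, low=60, high=180):
        # On a digitised radiograph (0 = black, 255 = white), an over-exposed
        # (too dense) film scans dark and an under-exposed film scans light;
        # classify on the mean of the global grey-level histogram.
        mean = float(np.mean(image))
        if mean < low:
            return "too dark"
        if mean > high:
            return "too light"
        return "acceptable"

    label = classify_density(np.full((32, 32), 120, dtype=np.uint8))
    ```

    A centralized QA system could apply such a rule to every incoming film and flag offices whose exposure or processing drifts out of range, which is the feedback loop the authors propose.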

  17. Replacing a technology - The Large Space Telescope and CCDs

    NASA Astrophysics Data System (ADS)

    Smith, R. W.; Tatarewicz, J. H.

    1985-07-01

    The technological improvements, design choices and mission goals which led to the inclusion of CCD detectors in the wide field camera of the Large Space Telescope (LST) to be launched by the STS are recounted. Consideration of CCD detectors began before CCDs had seen wide astronomical applications. During planning for the ST, in the 1960s, photographic methods and a vidicon were considered, and seemed feasible provided that periodic manual maintenance could be performed. The invention of CCDs was first reported in 1970 and by 1973 the CCDs were receiving significant attention as potential detectors instead of a vidicon, which retained its own technological challenges. The CCD format gained new emphasis when success was achieved in developments for planetary-imaging spacecraft. The rapidity of progress in CCD capabilities, coupled with the continued shortcomings of the vidicon, resulted in a finalized choice for a CCD device by 1977. The decision was also prompted by continuing commercial and military interest in CCDs, which was spurring the development of the technology and improving the sensitivities and reliability while lowering the costs.

  18. Commissioning and Science Verification of JAST/T80

    NASA Astrophysics Data System (ADS)

    Ederoclte, A.; Cenarro, A. J.; Marín-Franch, A.; Cristóbal-Hornillos, D.; Vázquez Ramió, H.; Varela, J.; Hurier, G.; Moles, M.; Lamadrid, J. L.; Díaz-Martín, M. C.; Iglesias Marzoa, R.; Tilve, V.; Rodríguez, S.; Maícas, N.; Abri, J.

    2017-03-01

    Located at the Observatorio Astrofísico de Javalambre, the "Javalambre Auxiliary Survey Telescope" is an 80 cm telescope with an unvignetted 2-square-degree field of view. The telescope is equipped with T80Cam, a camera with a large-format CCD and two filter wheels which can host, at any given time, 12 filters. The telescope has been designed to provide optical quality all across the field of view, which is achieved with a field corrector. In this talk, I will review the commissioning of the telescope. The optical performance in the centre of the field of view has been tested with the lucky-imaging technique, yielding a telescope PSF of 0.4 arcsec, close to that expected from theory. Moreover, the tracking of the telescope does not affect the image quality: stars appear round even in exposures of 10 minutes obtained without guiding. Most importantly, we present the preliminary results of science verification observations which combine the two main characteristics of this telescope: the large field of view and the special filter set.

  19. Large-format platinum silicide microwave kinetic inductance detectors for optical to near-IR astronomy.

    PubMed

    Szypryt, P; Meeker, S R; Coiffard, G; Fruitwala, N; Bumble, B; Ulbricht, G; Walter, A B; Daal, M; Bockstiegel, C; Collura, G; Zobrist, N; Lipartito, I; Mazin, B A

    2017-10-16

    We have fabricated and characterized 10,000 and 20,440 pixel Microwave Kinetic Inductance Detector (MKID) arrays for the Dark-speckle Near-IR Energy-resolved Superconducting Spectrophotometer (DARKNESS) and the MKID Exoplanet Camera (MEC). These instruments are designed to sit behind adaptive optics systems with the goal of directly imaging exoplanets in an 800-1400 nm band. Previous large optical and near-IR MKID arrays were fabricated using substoichiometric titanium nitride (TiN) on a silicon substrate. These arrays, however, suffered from severe non-uniformities in the TiN critical temperature, causing resonances to shift away from their designed values and lowering usable detector yield. We have begun fabricating DARKNESS and MEC arrays using platinum silicide (PtSi) on sapphire instead of TiN. Not only do these arrays have much higher uniformity than the TiN arrays, resulting in higher pixel yields, they have demonstrated better spectral resolution than TiN MKIDs of similar design. PtSi MKIDs also do not display the hot pixel effects seen when illuminating TiN-on-silicon MKIDs with photons of wavelengths shorter than 1 µm.

  20. Concave Surround Optics for Rapid Multi-View Imaging

    DTIC Science & Technology

    2006-11-01

    …thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high… …hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system… …flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically…

  1. Organize Your Digital Photos: Display Your Images Without Hogging Hard-Disk Space

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    According to InfoTrends/CAP Ventures, by the end of this year more than 55 percent of all U.S. households will own at least one digital camera. With so many digital cameras in use, it is important for people to understand how to organize and store digital images in ways that make them easy to find. Additionally, today's affordable, large megapixel…

  2. A modular positron camera for the study of industrial processes

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.

    2011-10-01

    Positron imaging techniques rely on the detection of the back-to-back annihilation photons arising from positron decay within the system under study. A standard technique, positron emitting particle tracking (PEPT) [1], uses a number of these detected events to rapidly determine the position of a positron-emitting tracer particle introduced into the system under study. Typical applications of PEPT are in the study of granular and multi-phase materials in engineering and the physical sciences. Using components from redundant medical PET scanners, a modular positron camera has been developed. This camera consists of a number of small independent detector modules, which can be arranged in custom geometries tailored to the application in question. The flexibility of the modular camera geometry allows for high photon detection efficiency within specific regions of interest, the ability to study large and bulky systems, and the application of PEPT to difficult or remote processes, as the camera is inherently transportable.

  3. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation

    NASA Astrophysics Data System (ADS)

    Inamoto, Naho; Saito, Hideo

    2003-06-01

    This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation between the two real camera images nearest the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for the view interpolation. The method can therefore easily be applied to a dynamic event even in a large space, because the effort of camera calibration is reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of these images completes the virtual view of the whole soccer scene. An application for fly-through observation of a soccer match is introduced, along with the view-synthesis algorithm and experimental results.
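    The core of the interpolation step is blending the projections of matched features between the two nearest real cameras. A minimal linear-morphing sketch (the paper's actual transfer works along epipolar-line correspondences; the function name and simple weighting here are assumptions):

    ```python
    import numpy as np

    def interpolate_view(pts_a, pts_b, alpha):
        # alpha = 0 reproduces camera A's projection, alpha = 1 camera B's;
        # intermediate values approximate a virtual viewpoint between them.
        pts_a = np.asarray(pts_a, dtype=float)
        pts_b = np.asarray(pts_b, dtype=float)
        return (1.0 - alpha) * pts_a + alpha * pts_b

    # A matched point seen at (100, 50) in camera A and (140, 60) in camera B:
    mid = interpolate_view([[100, 50]], [[140, 60]], 0.5)
    ```

    Applying such a blend per region (field, players, stands) and compositing the results is what lets the method dispense with strong calibration: only correspondences, not full camera matrices, are needed.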

  4. 3D surface pressure measurement with single light-field camera and pressure-sensitive paint

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth

    2018-05-01

    A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a similar hardware setup to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared-cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.

  5. GROUND-BASED Paα NARROW-BAND IMAGING OF LOCAL LUMINOUS INFRARED GALAXIES. I. STAR FORMATION RATES AND SURFACE DENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tateuchi, Ken; Konishi, Masahiro; Motohara, Kentaro

    2015-03-15

    Luminous infrared galaxies (LIRGs) are enshrouded by a large amount of dust produced by their active star formation, and it is difficult to measure their activity at optical wavelengths. We have carried out Paα narrow-band imaging observations of 38 nearby star-forming galaxies, including 33 LIRGs listed in the IRAS Revised Bright Galaxy Sample catalog, with the Atacama Near InfraRed camera on the University of Tokyo Atacama Observatory (TAO) 1.0 m telescope (miniTAO). Star formation rates (SFRs) estimated from the Paα fluxes, corrected for dust extinction using the Balmer decrement method (typically A_V ∼ 4.3 mag), show a good correlation with those from the bolometric infrared luminosity of the IRAS data within a scatter of 0.27 dex. This suggests that the correction of dust extinction for the Paα flux is sufficient in our sample. We measure the physical sizes and surface densities of infrared luminosity (Σ_L(IR)) and SFR (Σ_SFR) of star-forming regions for individual galaxies, and we find that most of the galaxies follow a sequence of local ultra-luminous or luminous infrared galaxies (U/LIRGs) on the L(IR)-Σ_L(IR) and SFR-Σ_SFR plane. We confirm that a transition of the sequence from normal galaxies to U/LIRGs is seen at L(IR) = 8 × 10^10 L_☉. Also, we find that there is a large scatter in physical size, different from normal galaxies or ULIRGs. Considering the fact that most U/LIRGs are merging or interacting galaxies, this scatter may be caused by strong external factors or differences in their merging stages.

  6. The applicability of space imagery to the small-scale topographic mapping of developing countries: A case study — the Sudan

    NASA Astrophysics Data System (ADS)

    Petrie, G.; El Niweiri, A. E. H.

    After reviewing the current status of topographic mapping in Sudan, the paper considers the possible applications of space imagery to the topographic mapping of the country at 1:100,000 scale. A comprehensive series of tests of the geometric accuracy and interpretability of six types of space imagery taken by the Landsat MSS, RBV and TM sensors, the MOMS scanner, the ESA Metric Camera and NASA's Large Format Camera was conducted over a test area established in the Red Sea Hills of Sudan, supplemented by further interpretation tests carried out over the areas of Khartoum and the Gezira. The results of these tests are given together with those from comparative tests carried out with other images acquired by the same sensors over test areas in developed countries (UK and USA). Further collateral information on topographic mapping at 1:100,000 scale from SPOT imagery has been provided by the Ordnance Survey, based on its tests and experience in North Yemen. The paper concludes with an analysis of the possibilities of mapping the main (non-equatorial) area of Sudan at 1:100,000 scale, based on the results of the extensive series of tests reported in the paper and elsewhere. Consideration is also given to the infrastructure required to support such a programme.

  7. Imaging and structural analysis of the Geyser field, Iceland, from underwater and drone based photogrammetry

    NASA Astrophysics Data System (ADS)

    Walter, Thomas R.; Jousset, Philippe; Allahbakhshi, Massoud; Witt, Tanja; Gudmundsson, Magnus T.; Pall Hersir, Gylfi

    2017-04-01

    The Haukadalur thermal area, southwestern Iceland, comprises a large number of individual thermal springs, geysers and hot pots that are roughly aligned in a north-south direction. The Haukadalur field is located on the eastern slope of a hill that is structurally delimited by fissures associated with the Western Volcanic Zone. A detailed analysis of the spatial distribution, structural relations and permeability of the Haukadalur thermal area remained to be carried out. Using high-resolution optical and radiometric infrared cameras on an unmanned aerial vehicle (UAV), we are able to identify over 350 distinct thermal spots distributed in distinct areas. Close analysis of their arrangement yields a preferred direction that is consistent with the assumed tectonic trend in the area. Furthermore, using thermally isolated deep underwater cameras, we are able to obtain images from the two largest geysers, Geysir (the namesake of all geysers in the world) and Strokkur, at depths exceeding 20 m. Near the surface the conduits of the geysers are nearly circular, but at depth the shape changes into a crack-like elongated fissure. In this presentation we discuss the structural relationship of the deeper and shallower parts of these geysers and elaborate on the conditions of geyser and hot-pot formation, with general relevance for other thermal fields elsewhere.

  8. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240×240 pixel images with the world's fastest high-precision faint-light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component of the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. 
The new generation instruments require these corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma, by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of imperfect operation of any physical electronic devices, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence; Observatoire Astronomique de Marseille Provence), the Laboratoire d'Astrophysique de Grenoble (LAOG/INSU/CNRS, Université Joseph Fourier; Observatoire des Sciences de l'Univers de Grenoble), and the Observatoire de Haute Provence (OHP/INSU/CNRS; Observatoire Astronomique de Marseille Provence). OCam and the CCD220 are the result of five years work, financed by the European commission, ESO and CNRS-INSU, within the OPTICON project of the 6th Research and Development Framework Programme of the European Union. The development of the CCD220, supervised by ESO, was undertaken by the British company e2v technologies, one of the world leaders in the manufacture of scientific detectors. The corresponding OPTICON activity was led by the Laboratoire d'Astrophysique de Grenoble, France. The OCam camera was built by a team of French engineers from the Laboratoire d'Astrophysique de Marseille, the Laboratoire d'Astrophysique de Grenoble and the Observatoire de Haute Provence. In order to secure the continuation of this successful project a new OPTICON project started in June 2009 as part of the 7th Research and Development Framework Programme of the European Union with the same partners, with the aim of developing a detector and camera with even more powerful functionality for use with an artificial laser star. This development is necessary to ensure the image quality of the future 42-metre European Extremely Large Telescope. ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. 
ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".

  9. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.

    PubMed

    Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy-to-use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which Passive Infrared triggering is confirmed through other modalities (e.g. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
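The confirmatory-sensing idea — accepting a PIR trigger only when a second modality agrees — can be sketched as follows. This is a minimal illustration, not WiseEye's actual code; the function names, thresholds, and flat-list frame representation are assumptions:

```python
# Confirmatory sensing sketch: a PIR trigger is accepted only if at
# least one secondary modality (radar motion, inter-frame pixel change)
# also indicates activity. Thresholds here are illustrative assumptions.

def pixel_change_fraction(prev_frame, curr_frame, diff_threshold=25):
    """Fraction of pixels whose grey-level change exceeds diff_threshold."""
    changed = sum(1 for p, c in zip(prev_frame, curr_frame)
                  if abs(p - c) > diff_threshold)
    return changed / len(curr_frame)

def confirm_trigger(pir_fired, radar_motion, prev_frame, curr_frame,
                    min_change=0.02):
    """Return True only when a PIR trigger is confirmed by radar or by
    a sufficient inter-frame pixel change (reduces false positives)."""
    if not pir_fired:
        return False
    if radar_motion:
        return True
    return pixel_change_fraction(prev_frame, curr_frame) >= min_change
```

Only confirmed triggers would lead to an image being saved, which is how spurious PIR events (vegetation movement, thermal noise) can be filtered out before post-processing.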

  10. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research

    PubMed Central

    Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy-to-use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named ‘WiseEye’, designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which Passive Infrared triggering is confirmed through other modalities (e.g. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444

  11. Photo-Machining of Semiconductor Related Materials with Femtosecond Laser Ablation and Characterization of Its Properties

    NASA Astrophysics Data System (ADS)

    Yokotani, Atushi; Mizuno, Toshio; Mukumoto, Toru; Kawahara, Kousuke; Ninomiya, Takahumi; Sawada, Hiroshi; Kurosawa, Kou

    We have analyzed the femtosecond-laser drilling process on a silicon surface in order to investigate the degree of thermal effects during the dicing of very thin silicon substrates. A regeneratively amplified Ti:Al2O3 laser (E = 30˜500 μJ/pulse, τ = 200 fs, λ = 780 nm, f = 10 Hz) was used and focused onto a 50 μm-thick silicon sample. An ICCD (intensified charge-coupled device) camera with a high-speed gate of 5 ns was used to image the hole during processing. First, we investigated how the hole-formation speed depends on laser energy. It was found that the larger the energy, the slower the speed at which the minimum hole was formed. Furthermore, under defocused conditions, even when a smaller energy density was used, a very slow formation speed and much larger thermal effects were observed simultaneously. We can therefore say that the degree of thermal effects is not simply related to the laser's energy density but is strongly related to the formation speed, which can be measured by the ICCD camera. A similar tendency was also observed for other materials important for IC fabrication (Al, Cu, SiO2 and acrylic resin).

  12. Hyper Suprime-Camera Survey of the Akari NEP Wide Field

    NASA Astrophysics Data System (ADS)

    Goto, Tomotsugu; Toba, Yoshiki; Utsumi, Yousuke; Oi, Nagisa; Takagi, Toshinobu; Malkan, Matt; Ohayma, Youichi; Murata, Kazumi; Price, Paul; Karouzos, Marios; Matsuhara, Hideo; Nakagawa, Takao; Wada, Takehiko; Serjeant, Steve; Burgarella, Denis; Buat, Veronique; Takada, Masahiro; Miyazaki, Satoshi; Oguri, Masamune; Miyaji, Takamitsu; Oyabu, Shinki; White, Glenn; Takeuchi, Tsutomu; Inami, Hanae; Perason, Chris; Malek, Katarzyna; Marchetti, Lucia; Lee, Hyung Mok; Im, Myung; Kim, Seong Jin; Koptelova, Ekaterina; Chao, Dani; Wu, Yi-Han; AKARI NEP Survey Team; AKARI All Sky Survey Team

    2017-03-01

    The extragalactic background suggests half the energy generated by stars was reprocessed into the infrared (IR) by dust. At z ∼1.3, 90% of star formation is obscured by dust. To fully understand the cosmic star formation history, it is therefore critical to investigate infrared emission. AKARI has made deep mid-IR observations using its continuous 9-band filters in the NEP field (5.4 deg^2), using ∼10% of all the pointed observations available throughout its lifetime. However, 11,000 AKARI infrared sources remain undetected in the previous CFHT/MegaCam imaging (r ∼25.9 AB mag). The redshifts and IR luminosities of these sources are unknown. These sources may contribute significantly to the cosmic star-formation rate density (CSFRD): for example, if they all lie at 1 < z < 2, the CSFRD at that epoch will be twice as high. We are carrying out deep imaging of the NEP field in 5 broad bands (g, r, i, z, and y) using Hyper Suprime-Cam (HSC), which has a 1.5-degree-diameter field of view on the Subaru 8 m telescope. This will provide photometric redshifts, and thereby IR luminosities, for the previously undetected 11,000 faint AKARI IR sources. Combined with AKARI's mid-IR AGN/SF diagnosis and accurate mid-IR luminosity measurements, this will allow a complete census of the cosmic star-formation/AGN accretion history obscured by dust.

  13. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    NASA Astrophysics Data System (ADS)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The metric 3D reconstruction of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent use after delivery through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying underwater photogrammetric techniques for several years using underwater digital cameras and, in this paper, low-cost off-the-shelf digital cameras. Results of tests made on submerged objects with three cameras are presented: the Canon PowerShot G12, the Intova Sport HD and the GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different potentials of the cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea-bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for surveying and representing submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  14. VizieR Online Data Catalog: gr photometry of Sextans A and Sextans B (Bellazzini+, 2014)

    NASA Astrophysics Data System (ADS)

    Bellazzini, M.; Beccari, G.; Fraternali, F.; Oosterloo, T. A.; Sollima, A.; Testa, V.; Galleti, S.; Perina, S.; Faccini, M.; Cusano, F.

    2014-04-01

    The tables present deep LBT/LBC g and r photometry of the stars having image quality parameters (provided by DAOPHOTII) CHI<=2 and SHARP within magnitude-dependent contours traced to include the bulk of stellar objects. The observations were obtained on the night of 2012-02-21 with the Large Binocular Camera at the Large Binocular Telescope in binocular mode; g images were acquired with the blue arm and r images with the red arm of the telescope/camera. The astrometry and the photometry were calibrated with stars in common with SDSS-DR9 (V/139). (2 data files).

  15. The Large Synoptic Survey Telescope (LSST) Camera

    ScienceCinema

    None

    2018-06-13

    Ranked as the top ground-based national priority for the field for the current decade, LSST is currently under construction in Chile. The U.S. Department of Energy’s SLAC National Accelerator Laboratory is leading the construction of the LSST camera – the largest digital camera ever built for astronomy. SLAC Professor Steven M. Kahn is the overall Director of the LSST project, and SLAC personnel are also participating in the data management. The National Science Foundation is the lead agency for construction of the LSST. Additional financial support comes from the Department of Energy and private funding raised by the LSST Corporation.

  16. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

    Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this approach requires many cameras, which leads to redundant, overlapping coverage and higher costs, resulting in waste. To reduce monitoring cost and address the difficulty of finding, identifying and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method performs well.
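The coverage-versus-overlap trade-off described above can be made concrete with a toy model. A sketch, assuming circular camera footprints on a gridded area (the paper's actual monitoring model is not specified here):

```python
# Illustrative coverage model (an assumption, not the paper's model):
# cameras cover circular footprints; we grid the area and count, per
# grid cell, how many cameras see it, quantifying coverage vs. overlap.

def coverage_stats(cameras, radius, width, height, step=1.0):
    """cameras: list of (x, y) positions. Returns (coverage, overlap):
    fraction of grid cells seen by >= 1 camera and by >= 2 cameras."""
    seen = multi = total = 0
    y = 0.0
    while y < height:
        x = 0.0
        while x < width:
            hits = sum(1 for (cx, cy) in cameras
                       if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2)
            total += 1
            seen += hits >= 1
            multi += hits >= 2
            x += step
        y += step
    return seen / total, multi / total
```

Minimizing the overlap fraction while keeping coverage near 1.0 is one way to phrase the camera-placement optimization that the abstract alludes to.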

  17. High-resolution hyperspectral ground mapping for robotic vision

    NASA Astrophysics Data System (ADS)

    Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich

    2018-04-01

    Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.
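The per-pixel band assignment of a mosaiced sensor can be illustrated with a short sketch that routes each raw pixel into a sparse per-band layer instead of demosaicing. The 2×2 pattern and flat-list frame format are assumptions for illustration (real snapshot-mosaic sensors often use 4×4 or 5×5 patterns):

```python
# Sketch: avoid demosaicing by routing each raw pixel into the sparse
# layer of its spectral band, as given by a repeating mosaic pattern.
# A 2x2 pattern with 4 bands is assumed purely for illustration.

def split_into_band_layers(raw, width, height, pattern):
    """raw: flat list of pixel values; pattern: 2D list of band indices.
    Returns {band: {(x, y): value}} sparse layers (no interpolation)."""
    ph, pw = len(pattern), len(pattern[0])
    layers = {}
    for y in range(height):
        for x in range(width):
            band = pattern[y % ph][x % pw]
            layers.setdefault(band, {})[(x, y)] = raw[y * width + x]
    return layers
```

Each sparse layer keeps only the raw samples of its band; fusing many such frames into a ground-surface map, as the paper proposes, fills in the locations one frame alone leaves empty.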

  18. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  19. Scintillator-fiber charged particle track-imaging detector

    NASA Technical Reports Server (NTRS)

    Binns, W. R.; Israel, M. H.; Klarmann, J.

    1983-01-01

    A scintillator-fiber charged-particle track-imaging detector was developed using a bundle of square-cross-section plastic scintillator fiber optics, proximity focused onto an image-intensified charge injection device (CID) camera. The tracks of charged particles penetrating into the scintillator fiber bundle are projected onto the CID camera and the imaging information is read out in video format. The detector was exposed to beams of 15 MeV protons and relativistic neon, manganese, and gold nuclei and images of their tracks were obtained. Details of the detector technique, properties of the tracks obtained, and preliminary range measurements of 15 MeV protons stopping in the fiber bundle are presented.

  20. Digest of NASA earth observation sensors

    NASA Technical Reports Server (NTRS)

    Drummond, R. R.

    1972-01-01

    A digest of technical characteristics of remote sensors and supporting technological experiments uniquely developed under NASA Applications Programs for Earth Observation Flight Missions is presented. Included are camera systems, sounders, interferometers, communications and experiments. In the text, these are grouped by types, such as television and photographic cameras, lasers and radars, radiometers, spectrometers, technology experiments, and transponder technology experiments. Coverage of the brief history of development extends from the first successful earth observation sensor aboard Explorer 7 in October, 1959, through the latest funded and flight-approved sensors under development as of October 1, 1972. A standard resume format is employed to normalize and mechanize the information presented.

  1. Electronic heterodyne recording of interference patterns

    NASA Technical Reports Server (NTRS)

    Merat, F. L.; Claspy, P. C.

    1979-01-01

    An electronic heterodyne technique is being investigated for video (i.e., television rate and format) recording of interference patterns. In the heterodyne technique electro-optic modulation is used to introduce a sinusoidal phase shift between the beams of an interferometer. For phase modulation frequencies between 0.1 and 15 MHz an image dissector camera may be used to scan the resulting temporally modulated interference pattern. Heterodyne detection of the camera output is used to selectively record the interference pattern. An advantage of such synchronous recording is that it permits recording of low-contrast fringes in high ambient light conditions. The application of this technique to the recording of holograms is discussed.

  2. Viking lander imaging investigation during extended and continuation automatic missions. Volume 2: Lander 2 picture catalog of experiment data record

    NASA Technical Reports Server (NTRS)

    Jones, K. L.; Henshaw, M.; Mcmenomy, C.; Robles, A.; Scribner, P. C.; Wall, S. D.; Wilson, J. W.

    1981-01-01

    Images returned by the two Viking landers during the extended and continuation automatic phases of the Viking Mission are presented. Information describing the conditions under which the images were acquired is included with skyline drawings showing the images positioned in the field of view of the cameras. Subsets of the images are listed in a variety of sequences to aid in locating images of interest. The format and organization of the digital magnetic tape storage of the images are described. A brief description of the mission and the camera system is also included.

  3. Viking lander imaging investigation during extended and continuation automatic missions. Volume 1: Lander 1 picture catalog of experiment data record

    NASA Technical Reports Server (NTRS)

    Jones, K. L.; Henshaw, M.; Mcmenomy, C.; Robles, A.; Scribner, P. C.; Wall, S. D.; Wilson, J. W.

    1981-01-01

    All images returned by Viking Lander 1 during the extended and continuation automatic phases of the Viking Mission are presented. Listings of supplemental information which describe the conditions under which the images were acquired are included together with skyline drawings which show where the images are positioned in the field of view of the cameras. Subsets of the images are listed in a variety of sequences to aid in locating images of interest. The format and organization of the digital magnetic tape storage of the images are described as well as the mission and the camera system.

  4. Paint and Shoot

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Through an initial SBIR contract with Langley Research Center, Stress Photonics, Inc. was able to successfully market their thermal strain measurement device, known as the Delta Therm 1000. The company was able to further its research on structural integrity analysis by signing another contract with Langley, this time a STTR contract, to develop its polariscope stress technology. Their commercial polariscope, the GFP 1000, involves a single rotating optical element and a digital camera for full-field image acquisition. The digital camera allows automated data to be acquired quickly and efficiently. Software analysis presents the data in an easy to interpret image format, depicting the magnitude of the shear strains and the directions of the principal strains.

  5. Simultaneous in-plane and out-of-plane displacement measurement based on a dual-camera imaging system and its application to inspection of large-scale space structures

    NASA Astrophysics Data System (ADS)

    Ri, Shien; Tsuda, Hiroshi; Yoshida, Takeshi; Umebayashi, Takashi; Sato, Akiyoshi; Sato, Eiichi

    2015-07-01

    Optical methods providing full-field deformation data are of potentially enormous interest to mechanical engineers. In this study, an in-plane and out-of-plane displacement measurement method based on a dual-camera imaging system is proposed. The in-plane and out-of-plane displacements are determined simultaneously using two in-plane displacement data sets observed from two digital cameras at different view angles. The fundamental measurement principle and experimental results of accuracy confirmation are presented. In addition, we applied this method to displacement measurement in a static loading and bending test of a solid rocket motor case (CFRP material; 2.2 m diameter and 2.3 m long) for the up-to-date Epsilon rocket developed by JAXA. The effectiveness and measurement accuracy are confirmed by comparison with a conventional displacement sensor. This method could be useful for diagnosing the reliability of large-scale space structures in rocket development.
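The two-view principle can be illustrated with a simplified projection model: assume camera i, viewing at angle theta_i from the surface normal, measures an apparent in-plane displacement m_i = u + w*tan(theta_i), where u is the true in-plane and w the out-of-plane displacement. This linear model is an assumption of this sketch, not the paper's exact formulation; two cameras at different angles then give two equations in two unknowns:

```python
import math

# Simplified two-view model (illustrative assumption): camera i at view
# angle theta_i sees apparent in-plane displacement m_i = u + w*tan(theta_i).
# Two cameras at different angles give two equations in (u, w).

def resolve_displacements(m1, theta1, m2, theta2):
    """Return (u, w): true in-plane and out-of-plane displacements."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    if abs(t1 - t2) < 1e-12:
        raise ValueError("view angles must differ")
    w = (m1 - m2) / (t1 - t2)   # out-of-plane component
    u = m1 - w * t1             # in-plane component
    return u, w
```

A symmetric ±theta arrangement maximizes the denominator tan(theta1) - tan(theta2) for a given angular budget, which makes the solution numerically better conditioned.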

  6. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
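The two-stage scheme described above — estimate the observer's global image shift by registration, subtract it, then track the target with a Kalman filter — can be sketched in one dimension. This is an illustrative sketch with assumed noise parameters, not the authors' implementation:

```python
# Sketch: remove the estimated observer (camera) motion from raw target
# measurements, then smooth the stabilized track with a 1-D
# constant-velocity Kalman filter. Noise values q, r are illustrative.

def kalman_track(measurements, observer_shifts, dt=1.0, q=1e-3, r=0.25):
    """measurements: raw target positions in image coordinates;
    observer_shifts: per-frame global offsets from image registration.
    Returns smoothed target positions with observer motion removed."""
    x = measurements[0] - observer_shifts[0]   # state: position
    v = 0.0                                    # state: velocity
    p = [[1.0, 0.0], [0.0, 1.0]]               # state covariance
    out = []
    for z, shift in zip(measurements, observer_shifts):
        z_stab = z - shift                     # stabilized measurement
        # Predict (constant-velocity model, Q = diag(q, q)).
        x = x + v * dt
        p00 = p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + q
        # Update with the stabilized position measurement (H = [1, 0]).
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = z_stab - x
        x += k0 * innov
        v += k1 * innov
        p = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        out.append(x)
    return out
```

With the observer shift removed, the filter sees a stationary or smoothly moving target even when the raw frames jump from wave action on the camera.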

  7. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: taking images that allow three-axis attitude determination, and taking multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  8. Science with the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2012-01-01

    The James Webb Space Telescope is the scientific successor to the Hubble and Spitzer Space Telescopes. It will be a large (6.6m) cold (50K) telescope launched into orbit around the second Earth-Sun Lagrange point. It is a partnership of NASA with the European and Canadian Space Agencies. The science goals for JWST include the formation of the first stars and galaxies in the early universe; the chemical, morphological and dynamical buildup of galaxies and the formation of stars and planetary systems. Recently, the goals have expanded to include studies of dark energy, dark matter, active galactic nuclei, exoplanets and Solar System objects. Webb will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Near-Infrared Imager and Slitless Spectrograph will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. The observatory is confirmed for launch in 2018; the design is complete and it is in its construction phase. Recent progress includes the completion of the mirrors, the delivery of the first flight instrument(s) and the start of the integration and test phase.

  9. The James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2012-01-01

    The James Webb Space Telescope is the scientific successor to the Hubble and Spitzer Space Telescopes. It will be a large (6.6m) cold (50K) telescope launched into orbit around the second Earth-Sun Lagrange point. It is a partnership of NASA with the European and Canadian Space Agencies. The science goals for JWST include the formation of the first stars and galaxies in the early universe; the chemical, morphological and dynamical buildup of galaxies and the formation of stars and planetary systems. Recently, the goals have expanded to include studies of dark energy, dark matter, active galactic nuclei, exoplanets and Solar System objects. Webb will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Near-Infrared Imager and Slitless Spectrograph will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. The observatory is confirmed for launch in 2018; the design is complete and it is in its construction phase. Recent progress includes the completion of the mirrors, the delivery of the first flight instruments and the start of the integration and test phase.

  10. The Sculptured Hills of the Taurus Highlands: Implications for the relative age of Serenitatis, basin chronologies and the cratering history of the Moon

    USGS Publications Warehouse

    Spudis, P.D.; Wilhelms, D.E.; Robinson, M.S.

    2011-01-01

    New images from the Lunar Reconnaissance Orbiter Camera show the distribution and geological relations of the Sculptured Hills, a geological unit widespread in the highlands between the Serenitatis and Crisium basins. The Sculptured Hills shows knobby, undulating, radially textured, and plains-like morphologies and in many places is indistinguishable from the similarly knobby Alpes Formation, a facies of ejecta from the Imbrium basin. The new LROC image data show that the Sculptured Hills in the Taurus highlands is Imbrium ejecta and not directly related to the formation of the Serenitatis basin. This occurrence and the geological relations of this unit suggest that the Apollo 17 impact melts may not be samples of the Serenitatis basin-forming impact, leaving their provenance undetermined and origin unexplained. If the Apollo 17 melt rocks are Serenitatis impact melt, up to half of the basin and large crater population of the Moon was created within a 30 Ma interval around 3.8 Ga in a global impact "cataclysm." Either interpretation significantly changes our view of the impact process and history of the Earth-Moon system. Copyright 2011 by the American Geophysical Union.

  11. Demonstration of a High-Fidelity Predictive/Preview Display Technique for Telerobotic Servicing in Space

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Bejczy, Antal K.

    1993-01-01

    A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.

  12. Acapulco, Mexico taken with electronic still camera

    NASA Image and Video Library

    1995-10-29

    STS073-E-5275 (3 Nov. 1995) --- Resort City of Acapulco appears in this north-looking view, photographed from the Earth-orbiting space shuttle Columbia with the Electronic Still Camera (ESC). The airport lies on a narrow neck of land between the sea and a large coastal lagoon. This mission marks the first time NASA has released in mid-flight electronically-downlinked color images that feature geographic subject matter.

  13. Fabrication of large dual-polarized multichroic TES bolometer arrays for CMB measurements with the SPT-3G camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Posada, C. M.; Ade, P. A. R.; Ahmed, Z.

    2015-08-11

    This work presents the procedures used by Argonne National Laboratory to fabricate large arrays of multichroic transition-edge sensor (TES) bolometers for cosmic microwave background (CMB) measurements. These detectors will be assembled into the focal plane for the SPT-3G camera, the third generation CMB camera to be installed in the South Pole Telescope. The complete SPT-3G camera will have approximately 2690 pixels, for a total of 16,140 TES bolometric detectors. Each pixel is comprised of a broad-band sinuous antenna coupled to a Nb microstrip line. In-line filters are used to define the different band-passes before the millimeter-wavelength signal is fed to the respective Ti/Au TES bolometers. There are six TES bolometer detectors per pixel, which allow for measurements of three band-passes (95 GHz, 150 GHz and 220 GHz) and two polarizations. The steps involved in the monolithic fabrication of these detector arrays are presented here in detail. Patterns are defined using a combination of stepper and contact lithography. The misalignment between layers is kept below 200 nm. The overall fabrication involves a total of 16 processes, including reactive and magnetron sputtering, reactive ion etching, inductively coupled plasma etching and chemical etching.

  14. San Juan National Forest Land Management Planning Support System (LMPSS) requirements definition

    NASA Technical Reports Server (NTRS)

    Werth, L. F. (Principal Investigator)

    1981-01-01

    The role of remote sensing data as it relates to a three-component land management planning system (geographic information, data base management, and planning model) can be understood only when user requirements are known. Personnel at the San Juan National Forest in southwestern Colorado were interviewed to determine data needs for managing and monitoring timber, rangelands, wildlife, fisheries, soils, water, geology and recreation facilities. While all the information required for land management planning cannot be obtained using remote sensing techniques, valuable information can be provided for the geographic information system. A wide range of sensors such as small and large format cameras, synthetic aperture radar, and LANDSAT data should be utilized. Because of the detail and accuracy required, high altitude color infrared photography should serve as the baseline data base and be supplemented and updated with data from the other sensors.

  15. An Accreting Protoplanet: Confirmation and Characterization of LkCa15b

    NASA Astrophysics Data System (ADS)

    Follette, Katherine; Close, Laird; Males, Jared; Macintosh, Bruce; Sallum, Stephanie; Eisner, Josh; Kratter, Kaitlin M.; Morzinski, Katie; Hinz, Phil; Weinberger, Alycia; Rodigas, Timothy J.; Skemer, Andrew; Bailey, Vanessa; Vaz, Amali; Defrere, Denis; spalding, eckhart; Tuthill, Peter

    2015-12-01

    We present a visible light adaptive optics direct imaging detection of a faint point source separated by just 93 milliarcseconds (~15 AU) from the young star LkCa 15. Using Magellan AO's visible light camera in Simultaneous Differential Imaging (SDI) mode, we imaged the star at Hydrogen alpha and in the neighboring continuum as part of the Giant Accreting Protoplanet Survey (GAPplanetS) in November 2015. The continuum images provide a sensitive and simultaneous probe of PSF residuals and instrumental artifacts, allowing us to isolate H-alpha accretion luminosity from the LkCa 15b protoplanet, which lies well inside of the LkCa 15 transition disk gap. This detection, combined with a nearly simultaneous near-infrared detection with the Large Binocular Telescope, provides an unprecedented glimpse of a planetary system during the epoch of planet formation.

  16. IAE - Inflatable Antenna Experiment

    NASA Image and Video Library

    1996-05-20

    STS077-150-094 (20 May 1996) --- Following its deployment from the Space Shuttle Endeavour, the Spartan 207/Inflatable Antenna Experiment (IAE) payload is backdropped over the Mississippi River and metropolitan St. Louis. The metropolitan area lies just below the gold-colored Spartan at bottom of photo. The view was photographed with a large format still camera on the first full day of in-space operations by the six-member crew. Managed by Goddard Space Flight Center (GSFC), Spartan is designed to provide short-duration, free-flight opportunities for a variety of scientific studies. The Spartan configuration on this flight is unique in that the IAE is part of an additional separate unit which is ejected once the experiment is completed. The IAE experiment will lay the groundwork for future technology development in inflatable space structures, which will be launched and then inflated like a balloon on-orbit.

  17. IAE - Inflatable Antenna Experiment

    NASA Image and Video Library

    1996-05-20

    STS077-150-129 (20 May 1996) --- Following its deployment from the Space Shuttle Endeavour, the Spartan 207/Inflatable Antenna Experiment (IAE) payload is backdropped over the Atlantic Ocean and Hampton Roads, Virginia. (Hold photograph vertically with land mass at top.) Virginia Beach and part of Newport News can be delineated in the upper left quadrant of the frame. The view was photographed with a large format still camera on the first full day of in-space operations by the six-member crew. Managed by Goddard Space Flight Center (GSFC), Spartan is designed to provide short-duration, free-flight opportunities for a variety of scientific studies. The Spartan configuration on this flight is unique in that the IAE is part of an additional separate unit which is ejected once the experiment is completed. The IAE experiment will lay the groundwork for future technology development in inflatable space structures, which will be launched and then inflated like a balloon on-orbit.

  18. American Society for Photogrammetry and Remote Sensing and ACSM, Fall Convention, Reno, NV, Oct. 4-9, 1987, ASPRS Technical Papers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    Recent advances in remote-sensing technology and applications are examined in reviews and reports. Topics addressed include the use of Landsat TM data to assess suspended-sediment dispersion in a coastal lagoon, the use of sun incidence angle and IR reflectance levels in mapping old-growth coniferous forests, information-management systems, Large-Format-Camera soil mapping, and the economic potential of Landsat TM winter-wheat crop-condition assessment. Consideration is given to measurement of ephemeral gully erosion by airborne laser ranging, the creation of a multipurpose cadaster, high-resolution remote sensing and the news media, the role of vegetation in the global carbon cycle, PC applications in analytical photogrammetry, multispectral geological remote sensing of a suspected impact crater, fractional calculus in digital terrain modeling, and automated mapping using GPS-based survey data.

  19. Extreme Emission Line Galaxies in CANDELS: Broad-Band Selected, Star-Bursting Dwarf Galaxies at Z greater than 1

    NASA Technical Reports Server (NTRS)

    VanDerWel, A.; Straughn, A. N.; Rix, H.-W.; Finkelstein, S. L.; Koekemoer, A. M.; Weiner, B. J.; Wuyts, S.; Bell, E. F.; Faber, S. M.; Trump, J. R.; hide

    2011-01-01

    We identify an abundant population of extreme emission line galaxies at redshift z=1.6 - 1.8 in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) imaging from Hubble Space Telescope/Wide Field Camera 3 (HST/WFC3). 69 candidates are selected by the large contribution of exceptionally bright emission lines to their near-infrared, broad-band fluxes. Supported by spectroscopic confirmation of strong [OIII] emission lines - with equivalent widths approximately 1000A - in the four candidates that have HST/WFC3 grism observations, we conclude that these objects are dwarf galaxies with approximately 10(exp 8) solar masses in stellar mass, undergoing an enormous starburst phase with a stellar-mass doubling timescale, M*/(dM*/dt), of only approximately 10 Myr. The star formation activity and the co-moving number density (3.7 x 10(exp -4) Mpc(exp -3)) imply that strong, short-lived bursts play a significant, perhaps even dominant role in the formation and evolution of dwarf galaxies at z greater than 1. The observed star formation activity can produce in less than 5 Gyr the same amount of stellar mass density as is presently contained in dwarf galaxies. Therefore, our observations provide a strong indication that the stellar populations of present-day dwarf galaxies formed mainly in strong, short-lived bursts, mostly at z greater than 1.

  20. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2009-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z greater than 6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z greater than 10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (less than 50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems, and discuss recent progress in constructing the observatory.

  1. Creative Film-Making.

    ERIC Educational Resources Information Center

    Smallman, Kirk

    The fundamentals of motion picture photography are introduced with a physiological explanation for the illusion of motion in a film. Film stock formats and emulsions, camera features, and lights are listed and described. Various techniques of exposure control are illustrated in terms of their effects. Photographing action with a stationary or a…

  2. Project Physics Handbook 4, Light and Electromagnetism.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Seven experiments and 40 activities are presented in this handbook. The experiments are related to Young's experiment, electric forces, forces on currents, electron-beam tubes, and wave modulation and communication. The activities are primarily concerned with aspects of scattered and polarized light, colors, image formation, lenses, cameras,…

  3. Multitemporal observations of identical active dust devils on Mars with the High Resolution Stereo Camera (HRSC) and Mars Orbiter Camera (MOC)

    NASA Astrophysics Data System (ADS)

    Reiss, D.; Zanetti, M.; Neukum, G.

    2011-09-01

    Active dust devils were observed in Syria Planum in Mars Orbiter Camera - Wide Angle (MOC-WA) and High Resolution Stereo Camera (HRSC) imagery acquired on the same day with a time delay of ~26 min. The unique operating technique of the HRSC allowed the measurement of the traverse velocities and directions of motion. Large dust devils observed in the HRSC image could be retraced to their counterparts in the earlier acquired MOC-WA image. Minimum lifetimes of three large (avg. ~700 m in diameter) dust devils are ~26 min, as inferred from retracing. For one of these large dust devils (~820 m in diameter) it was possible to calculate a minimum lifetime of ~74 min based on the measured horizontal speed and the length of its associated dust devil track. The comparison of our minimum lifetimes with previously published results on minimum and average lifetimes of small (~19 m in diameter, avg. min. lifetime of ~2.83 min) and medium (~185 m in diameter, avg. min. lifetime of ~13 min) dust devils implies that larger dust devils on Mars are active for much longer periods of time than smaller ones, as is the case for terrestrial dust devils. Knowledge of martian dust devil lifetimes is an important parameter for the calculation of dust lifting rates. Estimates of the contribution of large dust devils (>300-1000 m in diameter) indicate that they may contribute, at least regionally, ~50% of the dust entrained by dust devils into the atmosphere compared to dust devils <300 m in diameter, given that the size-frequency distribution follows a power-law. Although large dust devils occur relatively rarely and their sediment fluxes are probably lower than those of smaller dust devils, their contribution to the background dust opacity on Mars could be at least regionally large due to their longer lifetimes and their ability to lift dust into high atmospheric layers.
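    The minimum-lifetime estimate above is simple kinematics: dividing the length of a dust devil's track by its measured horizontal speed gives a lower bound on how long the devil was active. A minimal sketch with hypothetical input values (the abstract reports only the resulting ~74 min figure, not the underlying speed or track length):

```python
def min_lifetime_minutes(track_length_m: float, speed_m_per_s: float) -> float:
    """Lower bound on dust devil lifetime: track length / horizontal speed."""
    return track_length_m / speed_m_per_s / 60.0

# Hypothetical illustration: a 22.2 km track traversed at 5 m/s implies
# the dust devil was active for at least 74 minutes.
print(min_lifetime_minutes(22_200, 5.0))  # → 74.0
```

This is a lower bound only: the track records where dust was lifted, so the devil may have formed before the track began or persisted after it ended.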

  4. Compact dewar and electronics for large-format infrared detectors

    NASA Astrophysics Data System (ADS)

    Manissadjian, A.; Magli, S.; Mallet, E.; Cassaigne, P.

    2011-06-01

    The trend in infrared camera systems is toward higher performance (through higher resolution) and, in parallel, greater compactness for easier integration into systems. The latest developments at SOFRADIR / France on HgCdTe (Mercury Cadmium Telluride / MCT) cooled IR staring detectors show steady improvements in detector performance and compactness, achieved by reducing the pixel pitch and optimizing the encapsulation. Among the latest introduced detectors, the 15μm pixel pitch JUPITER HD-TV format (1280×1024) has to meet challenging specifications for dewar compactness, low power consumption and reliability. Initially introduced four years ago in a large dewar with a split Stirling cooler compressor of more than 2 kg, it is now available in a new versatile compact dewar that is vacuum-maintenance-free over typical 18-year mission profiles and that can be integrated with the different available Stirling coolers: the K548 microcooler for a light solution (less than 0.7 kg), or the K549 or LSF9548 for a split-cooler and/or higher-reliability solution. The IDDCAs are also required to have a simplified electrical interface, shortening system development time and standardizing the electronic board definition with smaller volumes. Sofradir is therefore introducing MEGALINK, a new compact command-and-control electronics unit compatible with most Sofradir IDDCAs. MEGALINK provides all necessary input biases and clocks to the FPAs, and digitizes and multiplexes the video outputs to provide a 14-bit output signal through a Camera Link interface, in a footprint smaller than a business card.

  5. Hubble Spots a Secluded Starburst Galaxy

    NASA Image and Video Library

    2017-12-08

    This image was taken by the NASA/ESA Hubble Space Telescope’s Advanced Camera for Surveys (ACS) and shows a starburst galaxy named MCG+07-33-027. This galaxy lies some 300 million light-years away from us, and is currently experiencing an extraordinarily high rate of star formation — a starburst. Normal galaxies produce only a couple of new stars per year, but starburst galaxies can produce a hundred times more than that. As MCG+07-33-027 is seen face-on, the galaxy’s spiral arms and the bright star-forming regions within them are clearly visible and easy for astronomers to study. In order to form newborn stars, the parent galaxy has to hold a large reservoir of gas, which is slowly depleted to spawn stars over time. For galaxies in a state of starburst, this intense period of star formation has to be triggered somehow — often this happens due to a collision with another galaxy. MCG+07-33-027, however, is special; while many galaxies are located within a large cluster of galaxies, MCG+07-33-027 is a field galaxy, which means it is rather isolated. Thus, the triggering of the starburst was most likely not due to a collision with a neighboring or passing galaxy and astronomers are still speculating about the cause. The bright object to the right of the galaxy is a foreground star in our own galaxy. Image credit: ESA/Hubble & NASA and N. Grogin (STScI)

  6. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  7. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
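    The ground-plane homographies mentioned in this record map pixel coordinates directly to world points on the walking surface. A minimal sketch of how such a mapping is applied once calibrated, using a purely hypothetical homography matrix (the calibrated matrices themselves are not given in the abstract):

```python
import numpy as np

def image_to_ground(H: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Map a pixel (u, v) to ground-plane coordinates (x, y) via homography H.

    The point is lifted to homogeneous coordinates, multiplied by the 3x3
    matrix H, then de-homogenized by dividing by the third component.
    """
    u, v = uv
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

# Hypothetical homography: a pure scaling of 0.01 m per pixel on the ground.
H = np.diag([0.01, 0.01, 1.0])
print(image_to_ground(H, np.array([640.0, 480.0])))  # → [6.4 4.8]
```

A real calibrated H additionally encodes the camera's rotation and position, so straight pixel rows generally map to oblique lines on the ground.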

  8. Optical design of the SuMIRe/PFS spectrograph

    NASA Astrophysics Data System (ADS)

    Pascal, Sandrine; Vives, Sébastien; Barkhouser, Robert; Gunn, James E.

    2014-07-01

    The SuMIRe Prime Focus Spectrograph (PFS), developed for the 8-m class SUBARU telescope, will consist of four identical spectrographs, each receiving 600 fibers from a 2394-fiber robotic positioner at the telescope prime focus. Each spectrograph includes three spectral channels to cover the wavelength range [0.38-1.26] um with a resolving power ranging between 2000 and 4000. A medium resolution mode is also implemented to reach a resolving power of 5000 at 0.8 um. Each spectrograph is made of four optical units: the entrance unit, which produces three corrected collimated beams, and three camera units (one per spectral channel: "blue", "red", and "NIR"). The beam is split using two large dichroics, and in each arm the light is dispersed by large VPH gratings (about 280x280mm). The proposed optical design was optimized to achieve the requested image quality while simplifying the manufacturing of the whole optical system. The camera design consists of an innovative Schmidt camera observing a large field of view (10 degrees) with a very fast beam (F/1.09). To achieve such performance, the classical spherical mirror is replaced by a catadioptric mirror (i.e. a meniscus lens with a reflective surface on the rear side of the glass, like a Mangin mirror). This article focuses on the optical architecture of the PFS spectrograph and the performance achieved. We first describe the global optical design of the spectrograph. Then, we focus on the Mangin-Schmidt camera design. The analysis of the optical performance and the results obtained are presented in the last section.

  9. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
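    The post-capture refocusing described in this tutorial can be sketched with the classic shift-and-add approach: each sub-aperture image of the 4D light field is translated in proportion to its angular offset from the center, and the shifted images are averaged. The sketch below (not the paper's implementation) uses integer-pixel shifts for simplicity; a real refocusing pipeline needs fractional shifts or the Fourier-domain tools the tutorial discusses:

```python
import numpy as np

def refocus(light_field: np.ndarray, shift: int) -> np.ndarray:
    """Synthetic refocusing by shift-and-add over sub-aperture images.

    light_field has shape (U, V, S, T): one S x T sub-aperture image for
    each angular sample (u, v). Each image is shifted in proportion to its
    angular offset from the central view, then all images are averaged.
    The shift parameter selects the synthetic focal plane.
    """
    U, V, S, T = light_field.shape
    acc = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer-pixel circular shift proportional to angular offset
            # (a crude stand-in for the fractional shifts a real camera needs).
            acc += np.roll(light_field[u, v],
                           (shift * (u - U // 2), shift * (v - V // 2)),
                           axis=(0, 1))
    return acc / (U * V)

# A uniform (textureless) light field refocuses to a uniform image
# at any shift, since all shifted copies are identical.
lf = np.ones((3, 3, 8, 8))
out = refocus(lf, shift=1)
print(out.min(), out.max())  # → 1.0 1.0
```

Scene points at the chosen focal depth align under the shift and add constructively; points at other depths are spread across pixels, producing the synthetic blur.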

  10. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    NASA Astrophysics Data System (ADS)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  11. Concepts, laboratory, and telescope test results of the plenoptic camera as a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Montilla, I.; Fernández-Valdivia, J. J.; Trujillo-Sevilla, J. L.; Rodríguez-Ramos, J. M.

    2012-07-01

    The plenoptic camera has been proposed as an alternative wavefront sensor suitable for extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the light field function, which is the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, such as depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing. The underlying concept of the plenoptic camera can be adapted to the case of a telescope by placing a lenslet array of the same f-number at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generalizes the Shack-Hartmann, curvature and pyramid wavefront sensors, in the sense that all of those could be considered particular cases of the plenoptic wavefront sensor, because the information they need as a starting point can be derived from the plenoptic image. Laboratory results obtained with extended objects, phase plates and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented in the paper, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.

  12. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++. The new technology was used satisfactorily with both children. Large increases in their responding were observed during the intervention periods (i.e., when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  13. The TolTEC Camera for the LMT Telescope

    NASA Astrophysics Data System (ADS)

    Bryan, Sean

    2018-01-01

    TolTEC is a new camera being built for the 50-meter Large Millimeter-wave Telescope (LMT) on Sierra Negra in Puebla, Mexico. The instrument will discover and characterize distant galaxies by detecting the thermal emission of dust heated by starlight. The polarimetric capabilities of the camera will measure magnetic fields in star-forming regions in the Milky Way. The optical design of the camera uses mirrors, lenses, and dichroics to simultaneously couple a 4 arcminute diameter field of view onto three single-band focal planes at 150, 220, and 280 GHz. The 7000 polarization-selective detectors are single-band horn-coupled LEKID detectors fabricated at NIST. A rotating half wave plate operates at ambient temperature to modulate the polarized signal. In addition to the galactic and extragalactic surveys already planned, TolTEC installed at the LMT will provide open observing time to the community.

  14. A Starburst in the Core of a Galaxy Cluster: the Dwarf Irregular NGC 1427A in Fornax

    NASA Astrophysics Data System (ADS)

    Mora, Marcelo D.; Chanamé, Julio; Puzia, Thomas H.

    2015-09-01

    Gas-rich galaxies in dense environments such as galaxy clusters and massive groups are affected by a number of possible types of interactions with the cluster environment, which make their evolution radically different from that of field galaxies. The dwarf irregular galaxy NGC 1427A, presently infalling toward the core of the Fornax galaxy cluster for the first time, offers a unique opportunity to study those processes at a level of detail not possible for galaxies at higher redshifts, when galaxy-scale interactions were more common. Using the spatial resolution of the Hubble Space Telescope/Advanced Camera for Surveys and auxiliary Very Large Telescope/FORS1 ground-based observations, we study the properties of the most recent episodes of star formation in this gas-rich galaxy, the only one of its type near the core of the Fornax cluster. We study the structural and photometric properties of young star cluster complexes in NGC 1427A, identifying 12 bright such complexes with exceptionally blue colors. The comparison of our broadband near-UV/optical photometry with simple stellar population models yields ages below ~4×10^6 years and stellar masses from a few 1000 up to ~3×10^4 M⊙, slightly dependent on the assumption of cluster metallicity and initial mass function. Their grouping is consistent with hierarchical and fractal star cluster formation. We use deep Hα imaging data to determine the current star formation rate in NGC 1427A and estimate the ratio, Γ, of star formation occurring in these star cluster complexes to that in the entire galaxy. We find Γ to be among the largest such values available in the literature, consistent with starburst galaxies. Thus a large fraction of the current star formation in NGC 1427A is occurring in star clusters, with the peculiar spatial arrangement of such complexes strongly hinting at the possibility that the starburst is being triggered by the passage of the galaxy through the cluster environment.
Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 70.B-0695.

  15. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

    High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 Gbytes/s continuously. We provide a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilize graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
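    The quoted 7.0 Gbytes/s follows directly from the digitizer parameters: 7.0 Gsamples/s at 8 bits (one byte) per sample. A one-line check:

```python
def data_rate_bytes_per_s(samples_per_s: float, bits_per_sample: int) -> float:
    """Raw ADC output rate: samples per second times bytes per sample."""
    return samples_per_s * bits_per_sample / 8

# 7.0 Gsamples/s at 8-bit resolution -> 7.0 GB/s of raw image data.
print(data_rate_bytes_per_s(7.0e9, 8))  # → 7000000000.0
```

This sustained rate exceeds what a CPU alone can absorb, which is why the paper interposes an FPGA pre-processor before handing reduced data to the GPU.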

  16. Ecological Relationships of Meso-Scale Distribution in 25 Neotropical Vertebrate Species

    PubMed Central

    Michalski, Lincoln José; Norris, Darren; de Oliveira, Tadeu Gomes; Michalski, Fernanda

    2015-01-01

    Vertebrates are a vital ecological component of Amazon forest biodiversity. Although vertebrates are a functionally important part of various ecosystem services, they continue to be threatened by anthropogenic impacts throughout the Amazon. Here we use a standardized, regularly spaced arrangement of camera traps within 25 km2 to provide a baseline assessment of vertebrate species diversity in a sustainable use protected area in the eastern Brazilian Amazon. We examined seasonal differences in the per-species encounter rates (number of photos per camera trap and number of cameras with photos). Generalized linear models (GLMs) were then used to examine the influence of five variables (altitude, canopy cover, basal area, distance to nearest river and distance to nearest large river) on the number of photos per species and on functional groups. GLMs were also used to examine the relationships between large predators [Jaguar (Panthera onca) and Puma (Puma concolor)] and their prey. A total of 649 independent photos of 25 species were obtained from 1,800 camera trap days (900 each during the wet and dry seasons). Only ungulates and rodents showed significant seasonal differences in the number of photos per camera. The number of photos differed between seasons for only three species (Mazama americana, Dasyprocta leporina and Myoprocta acouchy), all of which were photographed more (3- to 10-fold increase) during the wet season. Mazama americana was the only species for which a significant difference was found in occupancy, with more photos in more cameras during the wet season. For most groups and species, variation in the number of photos per camera was only weakly explained by the GLMs (deviance explained ranging from 10.3 to 54.4%). Terrestrial birds (Crax alector, Psophia crepitans and Tinamus major) and rodents (Cuniculus paca, Dasyprocta leporina and M. acouchy) were the notable exceptions, with our GLMs significantly explaining variation in the distribution of all species (deviance explained ranging from 21.0 to 54.5%). The group and species GLMs revealed some novel ecological information from this relatively pristine area. We found no association between large cats and their potential prey. We also found that rodent and bird species were more often recorded closer to streams. As hunters gain access via rivers, this finding suggests that there is currently little anthropogenic impact on these species. Our findings provide a standardized baseline for comparison with other sites and with which planned management and extractive activities can be evaluated. PMID:25938582

  17. Ecological relationships of meso-scale distribution in 25 neotropical vertebrate species.

    PubMed

    Michalski, Lincoln José; Norris, Darren; de Oliveira, Tadeu Gomes; Michalski, Fernanda

    2015-01-01

Vertebrates are a vital ecological component of Amazon forest biodiversity. Although vertebrates are a functionally important part of various ecosystem services, they continue to be threatened by anthropogenic impacts throughout the Amazon. Here we use a standardized, regularly spaced arrangement of camera traps within 25 km² to provide a baseline assessment of vertebrate species diversity in a sustainable-use protected area in the eastern Brazilian Amazon. We examined seasonal differences in per-species encounter rates (number of photos per camera trap and number of cameras with photos). Generalized linear models (GLMs) were then used to examine the influence of five variables (altitude, canopy cover, basal area, distance to nearest river and distance to nearest large river) on the number of photos per species and per functional group. GLMs were also used to examine the relationships between large predators [Jaguar (Panthera onca) and Puma (Puma concolor)] and their prey. A total of 649 independent photos of 25 species were obtained from 1,800 camera-trap days (900 each during wet and dry seasons). Only ungulates and rodents showed significant seasonal differences in the number of photos per camera. The number of photos differed between seasons for only three species (Mazama americana, Dasyprocta leporina and Myoprocta acouchy), all of which were photographed more often (a 3- to 10-fold increase) during the wet season. Mazama americana was the only species for which a significant difference was found in occupancy, with more photos in more cameras during the wet season. For most groups and species, variation in the number of photos per camera was only weakly explained by the GLMs (deviance explained ranging from 10.3 to 54.4%). Terrestrial birds (Crax alector, Psophia crepitans and Tinamus major) and rodents (Cuniculus paca, Dasyprocta leporina and M. acouchy) were the notable exceptions, with our GLMs significantly explaining variation in the distribution of all these species (deviance explained ranging from 21.0 to 54.5%). The group and species GLMs revealed novel ecological information from this relatively pristine area. We found no association between large cats and their potential prey. We also found that rodent and bird species were more often recorded closer to streams. As hunters gain access via rivers, this finding suggests that there is currently little anthropogenic impact on these species. Our findings provide a standardized baseline for comparison with other sites and against which planned management and extractive activities can be evaluated.
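The GLM step the records above describe (per-camera photo counts modelled against habitat covariates) is typically a Poisson regression with a log link. A minimal sketch follows, with a hand-rolled IRLS fitter and entirely synthetic data; the covariate, sample size, and coefficients are invented for illustration and are not the study's:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link via iteratively reweighted
    least squares (IRLS). X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        # Working response and weights for the log link
        z = eta + (y - mu) / mu
        XtW = X.T * mu            # X^T diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Synthetic data: photo counts per camera vs. distance to river (km),
# generated with a true intercept of 1.0 and slope of -0.4
rng = np.random.default_rng(0)
dist = rng.uniform(0, 5, 200)
y = rng.poisson(np.exp(1.0 - 0.4 * dist))
X = np.column_stack([np.ones_like(dist), dist])
beta = poisson_glm_irls(X, y)  # beta[1] recovers a negative slope
```

The fitted slope's sign indicates whether encounter rates rise or fall with the covariate, mirroring the stream-proximity pattern the abstract reports for rodents and birds.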

  18. Makran Mountain Range, Indus River Valley, Pakistan, India

    NASA Image and Video Library

    1984-10-13

41G-120-040 (5-13 Oct. 1984) --- Pakistan, featuring the city of Karachi, the Makran mountain range, the mouth of the Indus River and the North Arabian Sea, was photographed with a medium format camera aboard the space shuttle Challenger during the 41-G mission. Photo credit: NASA

  19. 78 FR 22795 - EPAAR Clause for Printing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-17

    .... Therefore, EPA is not considering the use of any voluntary consensus standards. J. Executive Order 12898... devices for the purpose of producing camera copy, negatives, a plate or image to be used in the production... are considered ``printing.'' ``Microform'' is any product produced in a miniaturized image format, for...

  20. Get-in-the-Zone (GITZ) Transition Display Format for Changing Camera Views in Multi-UAV Operations

    DTIC Science & Technology

    2008-12-01

the multi-UAV operator will switch between dynamic and static missions, each potentially involving very different scenario environments and task...another. Inspired by cinematography techniques to help audiences maintain spatial understanding of a scene across discrete film cuts, use of a

  1. Jig For Stereoscopic Photography

    NASA Technical Reports Server (NTRS)

    Nielsen, David J.

    1990-01-01

    Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely, distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.

  2. HandSight: Supporting Everyday Activities through Touch-Vision

    DTIC Science & Technology

    2015-10-01

switches between IR and RGB o Large, low resolution, and fixed focal length > 1ft • Raspberry PI NoIR: https://www.raspberrypi.org/products/pi-noir...camera/ o Raspberry Pi NoIR camera with external visible light filters o Good image quality, manually adjustable focal length, small, programmable 11...purpose and scope of the research. 2. KEYWORDS: Provide a brief list of keywords (limit to 20 words). 3. ACCOMPLISHMENTS: The PI is reminded that

  3. Matrix Determination of Reflectance of Hidden Object via Indirect Photography

    DTIC Science & Technology

    2012-03-01

the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and...Distribution Function (BRDF). Figure 1. Indirect photography with camera field of view dictated by point of illumination. 1.3 Research Focus In an...would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind

  4. A DirtI Application for LBT Commissioning Campaigns

    NASA Astrophysics Data System (ADS)

    Borelli, J. L.

    2009-09-01

In order to characterize the Gregorian focal stations and test the performance achieved by the Large Binocular Telescope (LBT) adaptive optics system, two infrared test cameras were constructed within a joint project between INAF (Osservatorio Astronomico di Bologna, Italy) and the Max Planck Institute for Astronomy (Germany). We describe here the functionality of, and the successful results obtained with, the Daemon for the Infrared Test Camera Interface (DirtI) during commissioning campaigns.

  5. Blowin' in the Wind: Both "Negative" and "Positive" Feedback in an Obscured High-z Quasar

    NASA Astrophysics Data System (ADS)

    Cresci, G.; Mainieri, V.; Brusa, M.; Marconi, A.; Perna, M.; Mannucci, F.; Piconcelli, E.; Maiolino, R.; Feruglio, C.; Fiore, F.; Bongiorno, A.; Lanzuisi, G.; Merloni, A.; Schramm, M.; Silverman, J. D.; Civano, F.

    2015-01-01

Quasar feedback in the form of powerful outflows is invoked as a key mechanism to quench star formation in galaxies, preventing massive galaxies from overgrowing and producing the red colors of ellipticals. On the other hand, some models also require "positive" active galactic nucleus feedback, inducing star formation in the host galaxy through enhanced gas pressure in the interstellar medium. However, finding observational evidence of the effects of both types of feedback is still one of the main challenges of extragalactic astronomy, as few observations of energetic and extended radiatively driven winds are available. Here we present SINFONI near-infrared integral field spectroscopy of XID2028, an obscured, radio-quiet z = 1.59 QSO detected in the XMM-COSMOS survey, in which we clearly resolve a fast (1500 km s-1) and extended (up to 13 kpc from the black hole) outflow in the [O III] line-emitting gas, whose large velocity and outflow rate cannot be sustained by star formation alone. The narrow component of the Hα emission and the rest-frame U-band flux from Hubble Space Telescope/Advanced Camera for Surveys imaging enable us to map the current star formation in the host galaxy: both tracers independently show that the outflow position lies in the center of an empty cavity surrounded by star-forming regions on its edge. The outflow is therefore removing the gas from the host galaxy ("negative feedback") but also triggering star formation through outflow-induced pressure at the edges ("positive feedback"). XID2028 represents the first example of a host galaxy showing both types of feedback simultaneously at work.

  6. HST WFC3 Early Release Science: Emission-Line Galaxies from IR Grism Observations

    NASA Technical Reports Server (NTRS)

    Straughn, A. N.; Kuntschner, H.; Kuemmel, M.; Walsh, J. R.; Cohen, S. H.; Gardner, J. P.; Windhorst, R. A.; O'Connell, R. W.; Pirzkal, N.; Meurer, G.; hide

    2010-01-01

We present grism spectra of emission-line galaxies (ELGs) from 0.6-1.6 microns from the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). These new infrared grism data augment previous optical Advanced Camera for Surveys G800L (0.6-0.95 micron) grism data in GOODS-South, extending the wavelength coverage well past the G800L red cutoff. The ERS grism field was observed at a depth of 2 orbits per grism, yielding spectra of hundreds of faint objects, a subset of which are presented here. ELGs are studied via the Ha, [O III], and [O II] emission lines detected in the redshift ranges 0.2 ≤ z ≤ 1.6, 1.2 ≤ z ≤ 2.4 and 2.0 ≤ z ≤ 3.6, respectively, in the G102 (0.8-1.1 microns; R approximately 210) and G141 (1.1-1.6 microns; R approximately 130) grisms. The higher spectral resolution afforded by the WFC3 grisms also reveals emission lines not detectable with the G800L grism (e.g., the [S II] and [S III] lines). From these relatively shallow observations, line luminosities, star formation rates, and grism spectroscopic redshifts are determined for a total of 25 ELGs to M(sub AB)(F098M) approximately 25 mag. The faintest source in our sample, with a strong but unidentified emission line, is M(sub AB)(F098M) = 26.9 mag. We also detect the expected trend of lower specific star formation rates for the highest-mass galaxies in the sample, indicative of downsizing and discovered previously from large surveys. These results demonstrate the remarkable efficiency and capability of the WFC3 NIR grisms for measuring galaxy properties to faint magnitudes.

  7. On the analysis of large data sets

    NASA Astrophysics Data System (ADS)

    Ruch, Gerald T., Jr.

We present a set of tools and techniques for performing detailed comparisons between computational models with high-dimensional parameter spaces and large sets of archival data. By combining a principal component analysis of a large grid of samples from the model with an artificial neural network, we create a powerful data visualization tool as well as a way to robustly recover physical parameters from a large set of experimental data. Our techniques are applied in the context of circumstellar disks, the likely sites of planetary formation. An analysis is performed applying the two-layer approximation of Chiang et al. (2001) and Dullemond et al. (2001) to the archive created by the Spitzer Space Telescope Cores to Disks Legacy program. We find two populations of disk sources. The first population is characterized by the lack of a puffed-up inner rim, while the second population appears to contain an inner rim that casts a shadow across the disk. The first population also exhibits a trend of increasing spectral index, while the second population exhibits a decreasing trend in the strength of the 20 μm silicate emission feature. We also present images of the giant molecular cloud W3 obtained with the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer (MIPS) on board the Spitzer Space Telescope. The images encompass the star-forming regions W3 Main, W3(OH), and a region that we refer to as the Central Cluster, which encloses the emission nebula IC 1795. We present a star-count analysis of the point sources detected in W3. The star-count analysis shows that the stellar population of the Central Cluster, when compared to that in the background, contains an overdensity of sources. The Central Cluster also contains an excess of sources with colors consistent with Class II Young Stellar Objects (YSOs). An analysis of the color-color diagrams also reveals a large number of Class II YSOs in the Central Cluster. Our results suggest that an earlier epoch of star formation created the Central Cluster, created a cavity, and triggered the active star formation in the W3 Main and W3(OH) regions. We also detect a new outflow and its candidate exciting star.
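The PCA-based parameter recovery described above can be illustrated in miniature. Here a toy one-parameter "model grid" is reduced with PCA (via SVD), and the parameter is recovered with a plain linear least-squares read-out standing in for the paper's neural network; all data, the spectrum model, and the component count are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy model grid: 300 "spectra" generated from one physical parameter t
t = rng.uniform(0.0, 1.0, 300)                  # parameter to recover
wavelengths = np.linspace(0.0, 1.0, 50)
spectra = np.exp(-np.outer(t, wavelengths))      # toy model spectra
spectra += rng.normal(0.0, 0.01, spectra.shape)  # observational noise

# PCA via SVD on mean-centred data; keep the 5 leading components
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
scores = (spectra - mean) @ Vt[:5].T

# Linear least-squares read-out in place of the paper's neural network
A = np.column_stack([np.ones(len(t)), scores])
w, *_ = np.linalg.lstsq(A, t, rcond=None)
rms = np.sqrt(np.mean((A @ w - t) ** 2))         # recovery error
```

The point of the dimensionality reduction is that the regression (or network) operates on a handful of scores rather than the full spectral vector, which is what makes fitting against a large archival data set tractable.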

  8. On the development of radiation tolerant surveillance camera from consumer-grade components

    NASA Astrophysics Data System (ADS)

Ambrožič, Klemen; Snoj, Luka; Öhlin, Lars; Gunnarsson, Jan; Barringer, Niklas

    2017-09-01

In this paper an overview is given of the process of designing a radiation-tolerant surveillance camera from consumer-grade components and commercially available particle-shielding materials. This involves use of the Monte Carlo particle transport code MCNP6 with ENDF/B-VII.0 nuclear data libraries, as well as testing the physical electrical systems against γ radiation, using JSI TRIGA Mark II fuel elements as γ-ray sources. A new aluminum 20 cm × 20 cm × 30 cm irradiation facility, with electrical power and a signal-wire guide tube to the reactor platform, was designed, constructed and used to irradiate large assemblies of electronic and optical components with activated fuel elements. Electronic components to be used in the camera were tested independently against γ radiation to determine their radiation tolerance. Several camera designs were proposed and simulated using MCNP to determine incident-particle and dose attenuation factors. Data obtained from the measurements and MCNP simulations will be used to finalize the design of three surveillance camera models with different radiation tolerances.
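MCNP handles the full coupled particle transport, but the zeroth-order effect of a shield can be hand-checked with the narrow-beam attenuation law I = I₀·exp(−μt). The material values below are illustrative numbers for lead near 1.25 MeV, not data from the paper; real work should pull coefficients from a reference such as NIST XCOM:

```python
import math

def attenuation_factor(mu_rho_cm2_g, density_g_cm3, thickness_cm):
    """Narrow-beam gamma attenuation factor exp(-mu * t).
    Ignores buildup from scattered photons, so it overestimates
    the shielding of thick slabs."""
    mu = mu_rho_cm2_g * density_g_cm3  # linear attenuation coefficient, 1/cm
    return math.exp(-mu * thickness_cm)

# Assumed illustrative values: lead at ~1.25 MeV,
# mu/rho ~ 0.059 cm^2/g, density 11.35 g/cm^3, 5 cm slab
f = attenuation_factor(0.059, 11.35, 5.0)  # roughly a factor-30 reduction
```

This kind of hand estimate is useful for sanity-checking the attenuation factors a transport code reports before trusting a full simulated design.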

  9. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna

    PubMed Central

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig

    2015-01-01

    Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km2 in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final ‘consensus’ dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
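The "simple algorithm" for aggregating volunteer answers is described as producing a consensus label plus a measure of agreement. A minimal sketch of that idea, a plurality vote with the plurality fraction as the agreement score, follows; the example labels are invented, and the actual Snapshot Serengeti pipeline differs in detail:

```python
from collections import Counter

def consensus(classifications):
    """Aggregate independent volunteer answers for one image into a
    consensus label and an agreement score (plurality fraction)."""
    votes = Counter(classifications)
    label, count = votes.most_common(1)[0]
    return label, count / len(classifications)

label, agreement = consensus(
    ["wildebeest", "wildebeest", "zebra", "wildebeest", "wildebeest"])
# label == "wildebeest", agreement == 0.8
```

Keeping the agreement score alongside the label is what lets downstream users filter to high-confidence classifications or flag ambiguous images for expert review.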

  10. Optical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S; Seppala, L; Gilmore, K

    2008-07-16

The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m and 0.69 m and six interchangeable broad-band filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.

  11. Lightcurve Studies of Trans-Neptunian Objects from the Outer Solar System Origins Survey using the Hyper Suprime-Camera

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Benecchi, Susan D.; Chen, Ying-Tung; Schwamb, Megan Elizabeth; Wang, Shiang-Yu; Lehner, Matthew; Gladman, Brett; Kavelaars, JJ; Petit, Jean-Marc; Bannister, Michele T.; Gwyn, Stephen; Volk, Kathryn

    2016-10-01

Lightcurves can reveal information about the gravitational processes that have acted on small bodies since their formation and/or their gravitational history. At the extremes, lightcurves can provide constraints on the material properties and interior structure of individual objects. In large sets, lightcurves can shed light on the source of small-body populations that did not form in place (such as the dynamically excited trans-Neptunian objects (TNOs)). We have used the sparsely sampled photometry from the well-characterized Outer Solar System Origins Survey (OSSOS) discovery and recovery observations to identify TNOs with potentially large-amplitude lightcurves. Large lightcurve amplitudes would indicate that the objects are likely elongated or in potentially interesting spin states; however, this would need to be confirmed with further follow-up observations. We here present the results of a 6-hour pilot study of a subset of 17 OSSOS objects using Hyper Suprime-Cam (HSC) on the Subaru Telescope. Subaru's large aperture and HSC's large field of view allow us to obtain measurements of multiple objects with a range of magnitudes in each telescope pointing. Photometry was carefully measured using an elongated-aperture method to account for the motion of the objects, producing the short but precise lightcurves that we present here. The OSSOS objects span a large range of sizes, from several hundred kilometres down to a few tens of kilometres in diameter. We are thus investigating smaller objects than previous lightcurve projects have typically studied.

  12. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
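The trade described here, spending most of a fixed readout bandwidth on a high-speed ROI while streaming low-speed full frames with the remainder, reduces to simple arithmetic. The link rate and frame geometries below are hypothetical illustrations, not the paper's hardware figures:

```python
def roi_frame_rate(total_bw_px_per_s, full_w, full_h, full_rate,
                   roi_w, roi_h):
    """Given a fixed readout bandwidth in pixels/s, reserve enough
    for the low-speed full frames and spend the remainder on the
    high-speed ROI; return the achievable ROI frame rate."""
    full_cost = full_w * full_h * full_rate
    remaining = total_bw_px_per_s - full_cost
    if remaining <= 0:
        raise ValueError("no bandwidth left for the ROI")
    return remaining / (roi_w * roi_h)

# Hypothetical numbers: a 50 Mpx/s link, 640x480 full frames at
# 10 fps, and a 128x128 ROI -- the ROI runs in the kfps range
rate = roi_frame_rate(50e6, 640, 480, 10, 128, 128)
```

The same arithmetic explains why a randomly addressable sensor helps: shrinking the ROI quadratically increases the frame rate available from the leftover bandwidth.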

  13. Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges.

    PubMed

    Fernández-Guisuraga, José Manuel; Sanz-Ablanedo, Enoc; Suárez-Seoane, Susana; Calvo, Leonor

    2018-02-14

    This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera in a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with another image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony of the single frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas in comparison with the WorldView-2 satellite imagery. The drone orthomosaic could constitute a viable alternative for the evaluation of post-fire vegetation regeneration in large and heterogeneous burned areas.
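The georeferencing accuracies quoted (errors under 30 cm in X-Y and under 55 cm in Z) are root-mean-square errors over ground-control-point residuals, and checking them is a one-liner. The residual values below are invented for illustration, not the study's measurements:

```python
import math

def rmse(residuals):
    """Root-mean-square error of ground-control-point residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical GCP check residuals in metres
xy_residuals = [0.21, -0.18, 0.25, -0.30, 0.12]
z_residuals = [0.40, -0.52, 0.35, -0.45, 0.50]
rmse_xy = rmse(xy_residuals)  # planimetric error
rmse_z = rmse(z_residuals)    # vertical error
```

Reporting X-Y and Z separately, as the abstract does, matters because single-frequency GPS solutions typically degrade vertical accuracy far more than horizontal.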

  14. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

Recently, discrete wavelet transform- (DWT-) based compressors such as JPEG2000 and CCSDS-IDC have been widely seen as the state-of-the-art compression schemes for charge-coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain substantial complex texture and contour information, which is a disadvantage for the subsequent coding stage. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a paired-basis posttransform is applied to the DWT coefficients. The basis pair comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an l_p-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are then resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
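The basis-selection step, choosing between DCT and Hadamard representations of a coefficient block by an l_p sparsity measure, can be sketched as follows. This is a simplified illustration of the selection idea on a 1-D block, not the authors' coder, and the test signal and p value are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def hadamard_matrix(n):
    """Normalized Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def select_posttransform(block, p=0.5):
    """Transform a 1-D coefficient block with both bases and keep
    whichever gives the smaller l_p quasi-norm, i.e. the sparser
    representation (smaller l_p with p < 1 favors sparsity)."""
    n = block.size
    cands = {"dct": dct_matrix(n) @ block,
             "hadamard": hadamard_matrix(n) @ block}
    lp = {name: np.sum(np.abs(c) ** p) for name, c in cands.items()}
    best = min(lp, key=lp.get)
    return best, cands[best]

best, coeffs = select_posttransform(np.linspace(0.0, 1.0, 8))
```

Because both bases are orthonormal, the transform preserves the block's energy; only how that energy is concentrated across coefficients differs, which is exactly what the l_p measure ranks.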

  15. Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges

    PubMed Central

    2018-01-01

    This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera in a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with another image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony of the single frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas in comparison with the WorldView-2 satellite imagery. The drone orthomosaic could constitute a viable alternative for the evaluation of post-fire vegetation regeneration in large and heterogeneous burned areas. PMID:29443914

  16. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small, but significant differences were seen in the two camera types used in the program, and camera sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements are continuing.

  17. S48-E-007

    NASA Image and Video Library

    2013-01-15

S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE

  18. Apollo 12 photography 70 mm, 16 mm, and 35 mm frame index

    NASA Technical Reports Server (NTRS)

    1970-01-01

    For each 70-mm frame, the index presents information on: (1) the focal length of the camera, (2) the photo scale at the principal point of the frame, (3) the selenographic coordinates at the principal point of the frame, (4) the percentage of forward overlap of the frame, (5) the sun angle (medium, low, high), (6) the quality of the photography, (7) the approximate tilt (minimum and maximum) of the camera, and (8) the direction of tilt. A brief description of each frame is also included. The index to the 16-mm sequence photography includes information concerning the approximate surface coverage of the photographic sequence and a brief description of the principal features shown. A column of remarks is included to indicate: (1) if the sequence is plotted on the photographic index map and (2) the quality of the photography. The pictures taken using the lunar surface closeup stereoscopic camera (35 mm) are also described in this same index format.

  19. GALEX 1st Light Near and Far Ultraviolet -100

    NASA Image and Video Library

    2003-05-28

    NASA's Galaxy Evolution Explorer took this image on May 21 and 22, 2003. The image was made from data gathered by the two channels of the spacecraft camera during the mission's "first light" milestone. It shows about 100 celestial objects in the constellation Hercules. The reddish objects represent those detected by the camera's near ultraviolet channel over a 5-minute period, while bluish objects were detected over a 3-minute period by the camera's far ultraviolet channel. The Galaxy Evolution Explorer's first light images are dedicated to the crew of the Space Shuttle Columbia. The Hercules region was directly above Columbia when it made its last contact with NASA Mission Control on February 1, over the skies of Texas. The Galaxy Evolution Explorer launched on April 28 on a mission to map the celestial sky in the ultraviolet and determine the history of star formation in the universe over the last 10 billion years. http://photojournal.jpl.nasa.gov/catalog/PIA04281

  20. Electronic Still Camera Project on STS-48

    NASA Technical Reports Server (NTRS)

    1991-01-01

On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand-held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real-time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.
