Science.gov

Sample records for airborne digital camera

  1. Design and implementation of digital airborne multispectral camera system

    NASA Astrophysics Data System (ADS)

    Lin, Zhaorong; Zhang, Xuguo; Wang, Li; Pan, Deai

    2012-10-01

    The multispectral imaging equipment is a new generation of remote sensor that can obtain the target image and its spectral information simultaneously. A digital airborne multispectral camera system using the discrete-filter method has been designed and implemented for unmanned aerial vehicle (UAV) and manned aircraft platforms. The system offers a larger frame, higher resolution, and both panchromatic and multispectral imaging, and it has great potential in environmental and agricultural monitoring and in target detection and discrimination. To enhance the precision and accuracy of position and orientation measurement, an Inertial Measurement Unit (IMU) is integrated into the camera. Meanwhile, a Temperature Control Unit (TCU) keeps the camera operating normally at different altitudes, preventing the window fogging and frosting that would greatly degrade imaging quality. Finally, flight experiments were conducted to demonstrate the functionality and performance of the camera; its resolution capability, positioning accuracy, and classification and recognition ability were validated.

  2. A simple method for vignette correction of airborne digital camera data

    SciTech Connect

    Nguyen, A.T.; Stow, D.A.; Hope, A.S.

    1996-11-01

    Airborne digital camera systems have gained popularity in recent years due to their flexibility, high geometric fidelity and spatial resolution, and fast data turnaround time. However, a common problem that plagues these framing systems is vignetting, which causes a falloff in image brightness away from the principal point. This paper presents a simple method for vignetting correction that utilizes laboratory images of a uniform illumination source. Multiple lab images are averaged and inverted to create digital correction templates, which are then applied to actual airborne data. The vignette correction was effective in removing the systematic falloff in spectral values. We have shown that vignette correction is a necessary part of preprocessing raw digital airborne remote sensing data. The consequences of not correcting for these effects are demonstrated in the context of monitoring salt marsh habitat. 4 refs.
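    The averaging-and-inversion procedure described in this abstract can be sketched in a few lines; the array shapes, normalization choice, and noise levels below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

np.random.seed(0)

def build_vignette_template(flat_field_frames):
    """Average several lab images of a uniform illumination source, then invert."""
    mean_flat = np.mean(np.stack(flat_field_frames), axis=0)
    # Normalize so the brightest (least vignetted) pixel gets gain 1.0;
    # darker corners get gain > 1 to restore their brightness.
    return mean_flat.max() / mean_flat

def correct_vignetting(image, template):
    """Apply the multiplicative correction template to an airborne frame."""
    return image * template

# Synthetic demo: radial falloff (50% darker at the corners) is removed.
yy, xx = np.mgrid[0:100, 0:100]
r2 = (yy - 50.0) ** 2 + (xx - 50.0) ** 2
falloff = 1.0 - 0.5 * r2 / r2.max()
flats = [falloff * 1000 + np.random.normal(0, 5, falloff.shape) for _ in range(10)]
template = build_vignette_template(flats)

scene = np.full((100, 100), 200.0)        # a truly uniform scene
observed = scene * falloff                # what the vignetting lens records
corrected = correct_vignetting(observed, template)
print(observed.std(), corrected.std())    # large falloff spread vs. near zero
```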

  3. Those Nifty Digital Cameras!

    ERIC Educational Resources Information Center

    Ekhaml, Leticia

    1996-01-01

    Describes digital photography--an electronic imaging technology that merges computer capabilities with traditional photography--and its uses in education. Discusses how a filmless camera works, types of filmless cameras, advantages and disadvantages, and educational applications of the consumer digital cameras. (AEF)

  4. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
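    The quantities the students explore here follow from simple pinhole optics; as a sketch (the representative wavelength and the Rayleigh constant 1.9 are conventional textbook assumptions, not taken from the article):

```python
import math

WAVELENGTH = 550e-9  # assumed representative (green) wavelength, in metres

def optimal_pinhole_diameter(focal_length_m):
    """Lord Rayleigh's classic rule of thumb: d ~ 1.9 * sqrt(f * wavelength)."""
    return 1.9 * math.sqrt(focal_length_m * WAVELENGTH)

def f_number(focal_length_m, pinhole_diameter_m):
    return focal_length_m / pinhole_diameter_m

def relative_exposure_time(f_num, reference_f_num=16.0):
    """Exposure time scales with the square of the f-number."""
    return (f_num / reference_f_num) ** 2

# A pinhole 50 mm in front of the sensor:
f = 0.050
d = optimal_pinhole_diameter(f)   # about a third of a millimetre
N = f_number(f, d)                # roughly f/160
print(f"pinhole {d * 1e3:.2f} mm, f/{N:.0f}, "
      f"{relative_exposure_time(N):.0f}x the exposure needed at f/16")
```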

  5. Digital Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, Samuel D.; Yeates, Herbert D.

    1993-01-01

    Digital electronic still camera part of electronic recording, processing, transmitting, and displaying system. Removable hard-disk drive in camera serves as digital electronic equivalent of photographic film. Images viewed, analyzed, or transmitted quickly. Camera takes images of nearly photographic quality and stores them in digital form. Portable, hand-held, battery-powered unit designed for scientific use. Camera used in conjunction with playback unit also serving as transmitting unit if images sent to remote station. Remote station equipped to store, process, and display images. Digital image data encoded with error-correcting code at playback/transmitting unit for error-free transmission to remote station.

  6. Digital camera simulation.

    PubMed

    Farrell, Joyce E; Catrysse, Peter B; Wandell, Brian A

    2012-02-01

    We describe a simulation of the complete image processing pipeline of a digital camera, beginning with a radiometric description of the scene captured by the camera and ending with a radiometric description of the image rendered on a display. We show that there is a good correspondence between measured and simulated sensor performance. Through the use of simulation, we can quantify the effects of individual digital camera components on system performance and image quality. This computational approach can be helpful for both camera design and image quality assessment.
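    A full radiometric simulation is beyond a short sketch, but the sensor stage of such a pipeline can be illustrated with a toy model (the noise and gain parameters below are invented for illustration and are not the authors' values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_capture(scene_rate_e_per_s, exposure_s, gain_e_per_dn=2.0,
                     read_noise_e=4.0, full_well_e=20000, bit_depth=8):
    """Toy sensor stage: shot noise + read noise + saturation + quantization.

    scene_rate_e_per_s stands in for the radiometric scene description,
    expressed as photoelectrons per second accumulated at each pixel.
    """
    electrons = scene_rate_e_per_s * exposure_s
    electrons = rng.poisson(electrons).astype(float)             # photon shot noise
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)  # read noise
    electrons = np.clip(electrons, 0, full_well_e)               # full-well saturation
    dn = electrons / gain_e_per_dn                               # A/D conversion gain
    return np.clip(dn, 0, 2 ** bit_depth - 1).astype(np.uint8)

# A flat gray patch: simulated SNR should track sqrt(signal), as on a real sensor.
scene = np.full((64, 64), 5000.0)               # e-/s at every pixel
img = simulate_capture(scene, exposure_s=0.02)  # mean signal of 100 e-
snr = img.mean() / img.std()
print(round(float(snr), 1))                     # near 100 / sqrt(100 + 4**2)
```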

  7. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  8. Airborne hyperspectral surface and cloud bi-directional reflectivity observations in the Arctic using a commercial, digital camera

    NASA Astrophysics Data System (ADS)

    Ehrlich, A.; Bierwirth, E.; Wendisch, M.; Herber, A.; Gayet, J.-F.

    2011-09-01

    Spectral radiance measurements by a digital single-lens reflex camera were used to derive the bi-directional reflectivity of clouds and different surfaces in the Arctic. The camera was calibrated radiometrically and spectrally to provide accurate radiance measurements with high angular resolution. A comparison with spectral radiance measurements by the SMART-Albedometer showed agreement within the uncertainties of both instruments. The bi-directional reflectivity, in terms of the hemispherical directional reflectance factor (HDRF), was obtained for sea ice, ice-free ocean, and clouds. The sea ice, with an albedo of ρ = 0.96, showed an almost isotropic HDRF, while sun glint was observed in the ocean HDRF (ρ = 0.12). For the cloud observations with ρ = 0.62, the fog bow - a backscatter feature typical of scattering by liquid water droplets - was captured by the camera. For measurements above a heterogeneous stratocumulus cloud, the number of images required to obtain a mean HDRF that clearly exhibits the fog bow was estimated at about 50 (10 min flight time). Representing the HDRF as a function of the scattering angle only reduces the required number of images to about 10 (2 min flight time). The measured cloud and ocean HDRF were compared to radiative transfer simulations. The ocean HDRF simulated with the observed surface wind speed of 9 m s^-1 agreed best with the measurements. For the cloud HDRF, the best agreement was obtained with a broad, weak fog bow simulated using a cloud droplet effective radius of Reff = 4 μm. This value agrees with the particle sizes measured in situ and retrieved from the spectral radiance of the SMART-Albedometer.

  9. Trajectory association across multiple airborne cameras.

    PubMed

    Sheikh, Yaser Ajmal; Shah, Mubarak

    2008-02-01

    A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras, to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a likelihood function for evaluating a hypothesized association between observations in multiple cameras that is geometrically motivated. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models, and through simulation quantitative performance is also reported.

  10. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
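    The core of any such method is the pinhole projection relation D = f · H / h. A minimal sketch under the paper's parallel-plane assumption (all numbers below are hypothetical, not taken from the paper):

```python
def object_distance_m(focal_length_mm, object_height_m,
                      image_height_px, sensor_height_mm, sensor_height_px):
    """Pinhole-model distance to an object of known height whose plane is
    parallel to the image plane: D = f * H / h, where h is the object's
    height as projected onto the sensor."""
    h_on_sensor_mm = image_height_px * (sensor_height_mm / sensor_height_px)
    return focal_length_mm * object_height_m / h_on_sensor_mm

# A 1.8 m tall reference object spans 600 px on a 24 mm / 4000 px sensor
# behind a 50 mm lens:
d = object_distance_m(50, 1.8, 600, 24, 4000)
print(round(d, 2))  # 25.0
```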

  11. A high-resolution airborne four-camera imaging system for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  12. Digital laser scanning fundus camera.

    PubMed

    Plesch, A; Klingbeil, U; Bille, J

    1987-04-15

    Imaging and documentation of the human retina for clinical diagnostics are conventionally achieved by classical optical methods. We designed a digital laser scanning fundus (LSF) camera. The optoelectronic instrument is based on scanning laser illumination of the retina and a modified video imaging procedure. It is coupled to a digital image buffer and a microcomputer for image storage and processing. Aside from its high sensitivity, the LSF camera incorporates new ophthalmic imaging methods such as polarization differential contrast. We give design considerations as well as a description of the instrument and its performance.

  13. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  14. Flash photography by digital still camera

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yoshitaka

    2001-04-01

    Recently, the number of commercially produced digital still cameras has increased rapidly. However, the detailed performance of digital still cameras has not been evaluated. One purpose of this paper is to devise a method for evaluating the performance of a new camera. Another is to show that a camera on the market can take scientific photographs of high quality, including photographs of high-speed phenomena.

  15. Traffic monitoring with serial images from airborne cameras

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Lachaise, Marie; Schmeer, Elisabeth; Krauss, Thomas; Runge, Hartmut

    The classical means of measuring traffic density and velocity depend on local measurements from induction loops and other on-site instruments. This information does not give the whole picture of the two-dimensional traffic situation. To obtain precise knowledge about the traffic flow of a large area, only airborne cameras, or cameras positioned at very high locations (towers, etc.), can provide an up-to-date image of all roads covered. This paper aims to show the potential of using image time series from these cameras to derive traffic parameters on the basis of single-car measurements. To determine precise velocities and other parameters from an image time series, exact geocoding is one of the first requirements for the acquired image data. The methods presented here for determining several traffic parameters for single vehicles and vehicle groups involve recording and evaluating a number of digital or analog aerial images from high altitude and with a large total field of view. Visual and automatic methods for the interpretation of images are compared. It turns out that the recording frequency of the individual images should be at least 1/3 Hz (visual interpretation), but is preferably 3 Hz or more, especially for automatic vehicle tracking. The accuracy and potential of the methods are analyzed and presented, as well as the use of a digital road database for improving the tracking algorithm and for integrating the results into further traffic applications. Shortcomings of the methods are given, as well as possible improvements regarding methodology and sensor platform.
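    Once each frame is exactly geocoded, single-car velocity estimation reduces to differencing positions across the time series. A minimal sketch (the track coordinates and frame rate below are invented for illustration):

```python
import math

def vehicle_speed_kmh(track, frame_interval_s):
    """Mean speed of one vehicle from a geocoded image time series.

    track: list of (easting_m, northing_m) positions, one per frame,
    obtained after exact geocoding of each aerial image.
    """
    dist = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    elapsed = frame_interval_s * (len(track) - 1)
    return 3.6 * dist / elapsed  # m/s -> km/h

# A car moving 10 m between frames taken at 3 Hz (the preferred rate above):
track = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
print(vehicle_speed_kmh(track, frame_interval_s=1 / 3))  # 108.0
```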

  16. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  17. High-performance digital color video camera

    NASA Astrophysics Data System (ADS)

    Parulski, Kenneth A.; D'Luna, Lionel J.; Benamati, Brian L.; Shelley, Paul R.

    1992-01-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique and two full-custom CMOS digital video processing integrated circuits, the color filter array (CFA) processor and the RGB postprocessor. The system used a 768 × 484 active element interline transfer CCD with a new field-staggered 3G color filter pattern and a lenslet overlay, which doubles the sensitivity of the camera. The industrial-quality digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB postprocessor digital integrated circuit includes a color correction matrix, gamma correction, 2D edge enhancement, and circuits to control the black balance, lens aperture, and focus.

  18. High-performance digital color video camera

    NASA Astrophysics Data System (ADS)

    Parulski, Kenneth A.; Benamati, Brian L.; D'Luna, Lionel J.; Shelley, Paul R.

    1991-06-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique, and two full custom CMOS digital video processing ICs, the 'CFA processor' and the 'RGB post- processor.' The system uses a 768 X 484 active element interline transfer CCD with a new 'field-staggered 3G' color filter pattern and a 'lenslet' overlay, which doubles the sensitivity of the camera. The digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB post-processor digital IC includes a color correction matrix, gamma correction, two-dimensional edge-enhancement, and circuits to control the black balance, lens aperture, and focus.

  19. Television camera on RMS surveys insulation on Airborne Support Equipment

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The television camera on the end effector of the Canadian-built Remote Manipulator System (RMS) is seen surveying some of the insulation on the Airborne Support Equipment (ASE). Flight controllers called for the survey following the departure of the Advanced Communications Technology Satellite (ACTS) and its Transfer Orbit Stage (TOS).

  20. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 × 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).

  1. Tips and Tricks for Digital Camera Users.

    ERIC Educational Resources Information Center

    Ekhaml, Leticia

    2002-01-01

    Discusses the use of digital cameras in school library media centers and offers suggestions for teachers and students in elementary schools. Describes appropriate image-editing software; explains how to create panoramas, screen savers, and coloring books; and includes useful tips for digital photographers. (LRW)

  2. Camera! Action! Collaborate with Digital Moviemaking

    ERIC Educational Resources Information Center

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  3. An Inexpensive Digital Infrared Camera

    ERIC Educational Resources Information Center

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  4. A stereoscopic lens for digital cinema cameras

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  5. Digital Camera Project Fosters Communication Skills

    ERIC Educational Resources Information Center

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  6. The Sloan Digital Sky Survey Photometric Camera

    NASA Astrophysics Data System (ADS)

    Gunn, J. E.; Carr, M.; Rockosi, C.; Sekiguchi, M.; Berry, K.; Elms, B.; de Haas, E.; Ivezić, Ž.; Knapp, G.; Lupton, R.; Pauls, G.; Simcoe, R.; Hirsch, R.; Sanford, D.; Wang, S.; York, D.; Harris, F.; Annis, J.; Bartozek, L.; Boroski, W.; Bakken, J.; Haldeman, M.; Kent, S.; Holm, S.; Holmgren, D.; Petravick, D.; Prosapio, A.; Rechenmacher, R.; Doi, M.; Fukugita, M.; Shimasaku, K.; Okada, N.; Hull, C.; Siegmund, W.; Mannery, E.; Blouke, M.; Heidtman, D.; Schneider, D.; Lucinio, R.; Brinkman, J.

    1998-12-01

    We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays, a photometric array that uses 30 2048 × 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm^2 and an astrometric array that uses 24 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground in the time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each such that two scans cover a filled stripe 2.5° wide. This paper presents engineering and technical details of the camera.

  7. National Guidelines for Digital Camera Systems Certification

    NASA Astrophysics Data System (ADS)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of details (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). 
The study examines all aspects of the final product, including its accuracy and the product pixel size.

  8. Practical aspects of adjusting digital cameras.

    PubMed

    Nordberg, Joshua J; Sluder, Greenfield

    2013-01-01

    This chapter introduces the adjustment of digital camera settings using the tools found within image acquisition software and discusses measuring gray-level information such as (1) the histogram, (2) line scan, and (3) other strategies. The pixel values in an image can be measured within many image capture software programs in two ways. The first is a histogram of pixel gray values and the second is a line-scan plot across a selectable axis of the image. Understanding how to evaluate the information presented by these tools is critical to properly adjusting the camera to maximize the image contrast without losing grayscale information. This chapter discusses the 0-255 grayscale resolution of an 8-bit camera; however, the concepts are the same for cameras of any bit depth. This chapter also describes camera settings, such as exposure time, offset, and gain, and the steps for contrast stretching such as setting the exposure time, adjusting offset and gain, and camera versus image display controls.
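    The histogram and line-scan tools described here are straightforward to reproduce. A sketch of how one might compute them for an 8-bit camera, plus a clipping check useful when adjusting gain and offset (the simulated frame is illustrative):

```python
import numpy as np

def gray_histogram(image_8bit):
    """Histogram of pixel gray values for an 8-bit camera image (0-255)."""
    return np.bincount(image_8bit.ravel(), minlength=256)

def line_scan(image_8bit, row):
    """Pixel values along one selectable horizontal axis of the image."""
    return image_8bit[row, :]

def clipping_fractions(image_8bit):
    """Fractions of pixels pinned at the gray-level extremes: nonzero values
    here mean grayscale information is lost at the chosen gain/offset."""
    hist = gray_histogram(image_8bit)
    n = image_8bit.size
    return hist[0] / n, hist[255] / n

# Simulated frame whose gain is set too high: many highlights clip at 255.
rng = np.random.default_rng(1)
frame = np.clip(rng.normal(200, 60, (480, 640)), 0, 255).astype(np.uint8)
lo, hi = clipping_fractions(frame)
print(hi > 0.1)  # True: a large fraction of pixels is saturated
```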

  9. Digital Earth Watch: Investigating the World with Digital Cameras

    NASA Astrophysics Data System (ADS)

    Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.

    2015-12-01

    Every digital camera, including the smartphone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website, which lets students explore light, color, and pixels, manipulate color in images, and make measurements, will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org Picture Post: http://picturepost.unh.edu

  10. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  11. Remote classification from an airborne camera using image super-resolution.

    PubMed

    Woods, Matthew; Katsaggelos, Aggelos

    2017-02-01

    The image processing technique known as super-resolution (SR), which attempts to increase the effective pixel sampling density of a digital imager, has gained rapid popularity over the last decade. The majority of literature focuses on its ability to provide results that are visually pleasing to a human observer. In this paper, we instead examine the ability of SR to improve the resolution-critical capability of an imaging system to perform a classification task from a remote location, specifically from an airborne camera. In order to focus the scope of the study, we address and quantify results for the narrow case of text classification. However, we expect the results generalize to a large set of related, remote classification tasks. We generate theoretical results through simulation, which are corroborated by experiments with a camera mounted on a DJI Phantom 3 quadcopter.

  12. The Sloan Digital Sky Survey Photometric Camera

    SciTech Connect

    Gunn, J.E.; Carr, M.; Rockosi, C.; Sekiguchi, M.; Berry, K.; Elms, B.; de Haas, E.; Ivezic, Z.; Knapp, G.; Lupton, R.; Pauls, G.; Simcoe, R.; Hirsch, R.; Sanford, D.; Wang, S.; York, D.; Harris, F.; Annis, J.; Bartozek, L.; Boroski, W.; Bakken, J.; Haldeman, M.; Kent, S.; Holm, S.; Holmgren, D.; Petravick, D.; Prosapio, A.; Rechenmacher, R.; Doi, M.; Fukugita, M.; Shimasaku, K.; Okada, N.; Hull, C.; Siegmund, W.; Mannery, E.; Blouke, M.; Heidtman, D.; Schneider, D.; Lucinio, R.; and others

    1998-12-01

    We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays: a photometric array that uses 30 2048 × 2048 SITe/Tektronix CCDs (24 µm pixels) with an effective imaging area of 720 cm², and an astrometric array that uses 24 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground in the time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each such that two scans cover a filled stripe 2.5° wide. This paper presents engineering and technical details of the camera. © 1998 The American Astronomical Society

  13. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  14. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
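    The Bayer sampling and interpolation stage of such a pipeline can be sketched with a minimal RGGB mosaic and bilinear demosaicing. This is a generic textbook scheme, not the paper's specific implementation, and the flat test image is invented for the example:

```python
import numpy as np

def conv3(img, k):
    """Same-size 3x3 convolution with zero padding (kernel is symmetric)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bayer_mosaic(rgb):
    """Sample a full-colour image onto an RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    m = np.zeros((h, w))
    m[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    m[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    m[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    m[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return m

def demosaic_bilinear(m):
    """Bilinear interpolation of each colour plane from its Bayer samples."""
    h, w = m.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = masks[1::2, 1::2, 2] = True
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for c in range(3):
        plane = np.where(masks[:, :, c], m, 0.0)
        out[:, :, c] = conv3(plane, k) / conv3(masks[:, :, c].astype(float), k)
    return out

# Round trip on a flat grey image: demosaicing should reproduce it exactly.
rgb = np.full((8, 8, 3), 0.5)
recovered = demosaic_bilinear(bayer_mosaic(rgb))
```

    The normalization by the convolved mask makes the same kernel serve all three planes despite their different sampling densities.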

  15. Digital logarithmic airborne gamma ray spectrometer

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Zhang, Qing-Xian; Li, Chen; Tan, Cheng-Jun; Ge, Liang-Quan; Gu, Yi; Cheng, Feng

    2014-07-01

    A new digital logarithmic airborne gamma ray spectrometer is designed in this study. The spectrometer adopts a high-speed and high-accuracy logarithmic amplifier (LOG114) to amplify the pulse signal logarithmically and to improve the utilization of the ADC dynamic range because the low-energy pulse signal has a larger gain than the high-energy pulse signal. After energy calibration, the spectrometer can clearly distinguish photopeaks at 239, 352, 583 and 609 keV in the low-energy spectral sections. The photopeak energy resolution of 137Cs improves to 6.75% from the original 7.8%. Furthermore, the energy resolution of three photopeaks, namely, K, U, and Th, is maintained, and the overall stability of the energy spectrum is increased through potassium peak spectrum stabilization. Thus, it is possible to effectively measure energy from 20 keV to 10 MeV.
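    The benefit of logarithmic amplification before the ADC can be illustrated numerically: compressing the pulse amplitudes logarithmically gives low-energy pulses a larger share of the ADC codes, reducing their relative quantization error. This is a simplified model with invented energies and a 10-bit ADC, not the spectrometer's actual signal chain:

```python
import numpy as np

def quantize(x, lo, hi, bits):
    """Ideal uniform ADC: clip to [lo, hi] and quantize to 2**bits levels."""
    codes = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * (2**bits - 1))
    return lo + codes / (2**bits - 1) * (hi - lo)

e = np.array([25.0, 100.0, 1000.0, 9000.0])   # pulse "energies" in keV
lo, hi, bits = 20.0, 10000.0, 10

# Linear path: quantize the amplitude directly.
lin = quantize(e, lo, hi, bits)

# Logarithmic path: amplify low pulses more, quantize, then invert.
log = np.exp(quantize(np.log(e), np.log(lo), np.log(hi), bits))

lin_err = np.abs(lin - e) / e   # relative quantization error per pulse
log_err = np.abs(log - e) / e
```

    In this toy model the relative error of a 25 keV pulse drops from tens of percent under linear quantization to well under one percent under logarithmic quantization, at the cost of slightly coarser resolution at the high-energy end.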

  16. Quality criterion for digital still camera

    NASA Astrophysics Data System (ADS)

    Bezryadin, Sergey

    2007-02-01

    The main quality requirements for a digital still camera are color capturing accuracy, low noise level, and quantum efficiency. Different consumers assign different priorities to these parameters, and camera designers need clearly formulated methods for their evaluation. While there are procedures for estimating noise level and quantum efficiency, there are no effective means for estimating color capturing accuracy. The criterion introduced in this paper fills this gap. The Luther-Ives condition for a correct color reproduction system became known at the beginning of the last century. However, since no detector system satisfies the Luther-Ives condition, there are always stimuli that are distinctly different for an observer but which the detectors are unable to distinguish. To estimate the conformity of a detector set with the Luther-Ives condition and calculate a measure of discrepancy, the angle between the detector sensor sensitivities and Cohen's Fundamental Color Space may be used. In this paper, this divergence angle is calculated for some typical CCD sensors, and a demonstration is provided of how the angle might be reduced with a corrective filter. In addition, it is shown that with a specific corrective filter, Foveon sensors turn into a detector system with good Luther-Ives compliance.
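    The angle between a sampled sensitivity curve and a reference subspace can be computed by projecting the sensitivity vector onto the subspace. The sketch below uses a tiny invented basis in place of Cohen's Fundamental Color Space, so it illustrates only the geometry, not the paper's actual data:

```python
import numpy as np

def divergence_angle(sensor, basis):
    """Angle (degrees) between a sampled sensor-sensitivity vector and the
    subspace spanned by the columns of `basis`."""
    coef, *_ = np.linalg.lstsq(basis, sensor, rcond=None)
    proj = basis @ coef                      # orthogonal projection
    cosang = np.linalg.norm(proj) / np.linalg.norm(sensor)
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# Toy basis of two "fundamental" vectors on 5 wavelength samples.
basis = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [0, 0]], dtype=float)
inside = np.array([2.0, 2.0, 3.0, 3.0, 0.0])   # lies in the subspace
outside = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # orthogonal to it
a_in = divergence_angle(inside, basis)
a_out = divergence_angle(outside, basis)
```

    A sensitivity lying inside the subspace gives 0°, an orthogonal one gives 90°; a corrective filter reshapes the sensitivity vector to shrink this angle.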

  17. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  18. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question must be answered in child pornography cases, where evidence is needed that a certain picture was made with a specific camera. We examined several methods for determining whether a specific image was made with a given camera: defects in the CCDs, the file formats used, noise introduced by the pixel arrays, and watermarking applied by the camera manufacturer.

  19. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    ERIC Educational Resources Information Center

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…

  20. Characterizing Digital Camera Systems: A Prelude to Data Standards

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    2002-01-01

    This viewgraph presentation profiles: 1) Digital imaging systems; 2) Specifying a digital imagery product; and 3) Characterization of data acquisition systems. Advanced large array digital imaging systems are routinely being used. Digital imagery guidelines are being developed by ASPRS and ISPRS. Guidelines and standards are of little use without standardized characterization methods. Characterization of digital camera systems is important for supporting digital imagery guidelines. Specifications are characterized in the lab and/or the field. Laboratory characterization is critical for optimizing and defining performance. In-flight characterization is necessary for an end-to-end system test.

  1. Seeing elements by visible-light digital camera.

    PubMed

    Zhao, Wenyang; Sakurai, Kenji

    2017-03-31

    A visible-light digital camera is used for taking ordinary photos, but with new operational procedures it can measure the photon energy in the X-ray wavelength region and therefore see chemical elements. This report describes how one can observe X-rays by means of such an ordinary camera: the front cover of the camera is replaced by an opaque X-ray window to block visible light and to allow X-rays to pass; the camera takes many snapshots (called single-photon-counting mode) to record every photon event individually; and an integrated-filtering method is newly proposed to correctly retrieve the energy of photons from raw camera images. Finally, the retrieved X-ray energy-dispersive spectra show fine energy resolution and great accuracy in energy calibration, and therefore the visible-light digital camera can be applied to routine X-ray fluorescence measurement to analyze the element composition in unknown samples. In addition, the visible-light digital camera is promising in that it could serve as a position-sensitive X-ray energy detector. It may become able to measure the element map or chemical diffusion in a multi-element system if it is fabricated with external X-ray optic devices. Owing to the camera's low expense and fine pixel size, the present method will be widely applied to the analysis of chemical elements as well as imaging.
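    The single-photon-counting step can be sketched as follows: in a dark frame containing isolated charge clusters, each local maximum above a noise threshold is treated as a photon event, and the surrounding 3x3 charge is summed to estimate the photon energy. This is a generic event-finding scheme with an invented toy frame, not the paper's integrated-filtering method:

```python
import numpy as np

def photon_events(frame, threshold):
    """Find isolated charge clusters above `threshold` and integrate a
    3x3 neighbourhood around each local maximum to recover photon energy."""
    events = []
    h, w = frame.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = frame[y - 1:y + 2, x - 1:x + 2]
            # A pixel is an event centre if it is hot and the local maximum.
            if frame[y, x] >= threshold and frame[y, x] == patch.max():
                events.append((y, x, patch.sum()))
    return events

# Toy frame: one photon whose charge is shared across neighbouring pixels.
frame = np.zeros((8, 8))
frame[3, 4] = 50.0   # centre pixel
frame[3, 5] = 30.0   # charge sharing
frame[2, 4] = 20.0
ev = photon_events(frame, threshold=40.0)
```

    Summing the neighbourhood recovers the charge split by diffusion; histogramming the event sums over many snapshots yields the energy-dispersive spectrum.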

  2. Seeing elements by visible-light digital camera

    PubMed Central

    Zhao, Wenyang; Sakurai, Kenji

    2017-01-01

    A visible-light digital camera is used for taking ordinary photos, but with new operational procedures it can measure the photon energy in the X-ray wavelength region and therefore see chemical elements. This report describes how one can observe X-rays by means of such an ordinary camera: the front cover of the camera is replaced by an opaque X-ray window to block visible light and to allow X-rays to pass; the camera takes many snapshots (called single-photon-counting mode) to record every photon event individually; and an integrated-filtering method is newly proposed to correctly retrieve the energy of photons from raw camera images. Finally, the retrieved X-ray energy-dispersive spectra show fine energy resolution and great accuracy in energy calibration, and therefore the visible-light digital camera can be applied to routine X-ray fluorescence measurement to analyze the element composition in unknown samples. In addition, the visible-light digital camera is promising in that it could serve as a position-sensitive X-ray energy detector. It may become able to measure the element map or chemical diffusion in a multi-element system if it is fabricated with external X-ray optic devices. Owing to the camera's low expense and fine pixel size, the present method will be widely applied to the analysis of chemical elements as well as imaging. PMID:28361916

  3. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  4. Reconstructing spectral reflectance from digital camera through samples selection

    NASA Astrophysics Data System (ADS)

    Cao, Bin; Liao, Ningfang; Yang, Wenming; Chen, Haobo

    2016-10-01

    Spectral reflectance provides the most fundamental information about objects and is recognized as their "fingerprint", since reflectance is independent of illumination and viewing conditions. However, reconstructing high-dimensional spectral reflectance from relatively low-dimensional camera outputs is an ill-posed problem, and most methods require the camera's spectral responsivity. We propose a method to reconstruct spectral reflectance from digital camera outputs without prior knowledge of the camera's spectral responsivity. The method averages the reflectances of a subset selected from the main training samples by prescribing a limit on the tolerable color difference between the training samples and the camera outputs. Different tolerable color differences of training samples were investigated with Munsell chips under a D65 light source. Experimental results show that the proposed method outperforms the classic PI method in terms of multiple evaluation criteria between the actual and the reconstructed reflectances. Moreover, the reconstructed spectral reflectances lie between 0 and 1, which gives them actual physical meaning, an advantage over traditional methods.
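    The selection-and-averaging idea can be sketched directly: given training pairs of camera outputs and measured reflectances, average the reflectances of the training samples whose camera output lies within a tolerance of the query. The sketch below uses plain Euclidean RGB distance as a stand-in for a color-difference metric, and all data are invented:

```python
import numpy as np

def reconstruct_reflectance(query_rgb, train_rgb, train_refl, tol):
    """Average reflectances of training samples within `tol` of the query;
    fall back to the single nearest sample if none qualify."""
    d = np.linalg.norm(train_rgb - query_rgb, axis=1)
    sel = d <= tol
    if not np.any(sel):
        sel = d == d.min()
    return train_refl[sel].mean(axis=0)

# Toy data: three training samples with 4-band "reflectances".
train_rgb = np.array([[0.2, 0.3, 0.4], [0.21, 0.31, 0.41], [0.9, 0.1, 0.1]])
train_refl = np.array([[0.2, 0.4, 0.6, 0.8],
                       [0.4, 0.6, 0.8, 1.0],
                       [0.9, 0.9, 0.1, 0.1]])
query = np.array([0.2, 0.3, 0.4])
r = reconstruct_reflectance(query, train_rgb, train_refl, tol=0.05)
```

    Because the output is an average of measured reflectances, it automatically stays within the physical 0-1 range, unlike unconstrained pseudoinverse solutions.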

  5. Review of up-to date digital cameras interfaces

    NASA Astrophysics Data System (ADS)

    Linkemann, Joachim

    2013-04-01

    Over the past 15 years, various interfaces on digital industrial cameras have been available on the market. This tutorial will give an overview of interfaces such as LVDS (RS644), Channel Link and Camera Link. In addition, other interfaces such as FireWire, Gigabit Ethernet, and now USB 3.0 have become more popular. Owing to their ease of use, these interfaces cover most of the market. Nevertheless, for certain applications and especially for higher bandwidths, Camera Link and CoaXPress are very useful. This tutorial will give a description of the advantages and disadvantages, comment on bandwidths, and provide recommendations on when to use which interface.

  6. Bringing the Digital Camera to the Physics Lab

    ERIC Educational Resources Information Center

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-01-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…

  7. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
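    The figure of merit described above (the gap of the weakest parameter, in orders of magnitude) can be sketched as a small comparison routine. The parameter names and numbers below are purely illustrative, not the measured values from the paper:

```python
import numpy as np

def performance_gap(camera, eye):
    """Figure of merit: the gap, in orders of magnitude, of the camera's
    weakest parameter relative to the eye (larger value = better)."""
    gaps = {k: np.log10(eye[k] / camera[k]) for k in eye}
    worst = max(gaps, key=gaps.get)
    return worst, gaps[worst]

# Illustrative numbers only (not the paper's measurements).
eye = {"dynamic_range": 1e6, "spatial_res_cpd": 60.0, "temporal_res_hz": 60.0}
cam = {"dynamic_range": 1e3, "spatial_res_cpd": 30.0, "temporal_res_hz": 120.0}
worst, gap = performance_gap(cam, eye)
```

    Taking the worst-case gap rather than an average reflects the paper's premise that a single limiting parameter (here dynamic range) dominates the overall performance shortfall.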

  8. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  9. Sensitivity Analysis of an Automated Calibration Routine for Airborne Cameras

    DTIC Science & Technology

    2013-03-01

    …the instant in time at which an image was captured. The SPAN featured a tight integration of a NovAtel GNSS receiver and the IMU, and provided continuous navigation information, using an Inertial Navigation System (INS), to bridge short Global Navigational Satellite System (GNSS) outages.

  10. Digital control of the Kuiper Airborne Observatory telescope

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann C.; Snyder, Philip K.

    1989-01-01

    The feasibility of using a digital controller to stabilize a telescope mounted in an airplane is investigated. The telescope is a 30 in. infrared telescope mounted aboard a NASA C-141 aircraft known as the Kuiper Airborne Observatory. Current efforts to refurbish the 14-year-old compensation system have led to considering a digital controller. A typical digital controller is modeled and added into the telescope system model. This model is simulated on a computer to generate the Bode plots and time responses which determine system stability and performance parameters. Important aspects of digital control system hardware are discussed. A summary of the findings shows that a digital control system would result in satisfactory telescope performance.

  11. Comparison of 10 digital SLR cameras for orthodontic photography.

    PubMed

    Bister, D; Mordarai, F; Aveling, R M

    2006-09-01

    Digital photography is now widely used to document orthodontic patients. High quality intra-oral photography depends on a satisfactory 'depth of field' focus and good illumination. Automatic 'through the lens' (TTL) metering is ideal to achieve both the above aims. Ten current digital single lens reflex (SLR) cameras were tested for use in intra- and extra-oral photography as used in orthodontics. The manufacturers' recommended macro-lens and macro-flash were used with each camera. Handling characteristics, colour-reproducibility, quality of the viewfinder and flash recharge time were investigated. No camera took acceptable images in factory default setting or 'automatic' mode: this mode was not present for some cameras (Nikon, Fujifilm); led to overexposure (Olympus) or poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only Olympus cameras were able to take intra- and extra-oral photographs without the need to change settings, and were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax), or aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, all cameras produced high quality intra- and extra-oral images, once appropriately adjusted. The resolution of the images is more than satisfactory for all cameras. There were significant differences relating to the quality of colour reproduction, size and brightness of the viewfinders. The Nikon D100 and Fujifilm S 3 Pro consistently scored best for colour fidelity. Pentax and Konica-Minolta had the largest and brightest viewfinders.

  12. Digital Holographic Interferometry for Airborne Particle Characterization

    DTIC Science & Technology

    2015-03-19

    …hologram and its extinction cross section, and a computational demonstration that holographic interferometry can resolve aerosol particle size evolution. Related output includes the poster "Digital Holographic Imaging of Aerosol Particles In-Flight," presented at the Characterization of Atmospheric Aerosols workshop, Smolenice, Slovak Republic (2013).

  13. Aerotriangulation Supported by Camera Station Position Determined via Physical Integration of LIDAR and SLR Digital Camera

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Martins, M.; Centeno, J.; Hainosz, F.

    2011-09-01

    Nowadays lidar and photogrammetric surveys are often used together in mapping procedures due to their complementary characteristics. A lidar survey can promptly acquire reliable elevation information that is sometimes difficult to obtain via photogrammetric procedures. On the other hand, a photogrammetric survey easily captures semantic information about the objects. Accessibility, availability, increasing sensor size, and quick image acquisition and processing are properties that have raised the use of SLR digital cameras in photogrammetry. Orthoimage generation is a powerful photogrammetric mapping procedure in which the advantages of integrating lidar and image datasets are very well characterized. However, to perform this application both datasets must be within a common reference frame. In this paper, a procedure for positioning and orienting digital images in the lidar frame via a combination of direct and indirect georeferencing is studied. The SLR digital camera was physically connected to the lidar system to calculate the camera station's position in the lidar frame. After that, aerotriangulation supported by the camera station's position is performed to obtain the images' exterior orientation parameters (EOP).

  14. Use of the Digital Camera To Increase Student Interest and Learning in High School Biology.

    ERIC Educational Resources Information Center

    Tatar, Denise; Robinson, Mike

    2003-01-01

    Attempts to answer two research questions: (1) Does the use of a digital camera in laboratory activities increase student learning?; and (2) Does the use of digital cameras motivate students to take a greater interest in laboratory work? Results indicate that the digital camera did increase student learning of process skills in two biology…

  15. Airborne Digital Sensor System and GPS-aided inertial technology for direct geopositioning in rough terrain

    USGS Publications Warehouse

    Sanchez, Richard D.

    2004-01-01

    High-resolution airborne digital cameras with onboard data collection based on the Global Positioning System (GPS) and inertial navigation systems (INS) technology may offer a real-time means to gather accurate topographic map information by reducing ground control and eliminating aerial triangulation. Past evaluations of this integrated system over relatively flat terrain have proven successful. The author uses the Emerge Digital Sensor System (DSS) combined with Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing to examine the positional mapping accuracy in rough terrain. The positional accuracy documented in this study did not meet large-scale mapping requirements owing to an apparent system mechanical failure. Nonetheless, the findings yield important information on a new approach for mapping in Antarctica and other remote or inaccessible areas of the world.

  16. Multispectral synthesis of daylight using a commercial digital CCD camera.

    PubMed

    Nieves, Juan L; Valero, Eva M; Nascimento, Sérgio M C; Hernández-Andrés, Javier; Romero, Javier

    2005-09-20

    Performance of multispectral devices in recovering spectral data has been intensively investigated in some applications, such as the spectral characterization of art paintings, but has received little attention in the context of spectral characterization of natural illumination. This study investigated the quality of the spectral estimation of daylight-type illuminants using a commercial digital CCD camera and a set of broadband colored filters. Several recovery algorithms were tested that required neither the spectral sensitivities of the camera sensors nor eigenvectors to describe the spectra. Tests were carried out both with virtual data, using simulated camera responses, and with real data obtained from actual measurements. It was found that it is possible to recover daylight spectra with high spectral and colorimetric accuracy with a reduced number of three to nine spectral bands.
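    One way to recover spectra without knowing the sensor sensitivities is to learn a direct linear map from camera responses to spectra using a training set. The sketch below simulates a hidden 6-channel camera and low-dimensional spectra to show that the map can be fit without ever touching the sensitivities; it is a generic regression sketch, not one of the paper's specific algorithms, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden "camera": 6 broadband channels sampling 31-point spectra.
S = np.abs(rng.normal(size=(6, 31)))    # unknown sensor sensitivities

# Training spectra drawn from a low-dimensional (6-basis) linear model,
# mimicking the low intrinsic dimensionality of daylight spectra.
B = np.abs(rng.normal(size=(6, 31)))    # basis spectra
coeff = np.abs(rng.normal(size=(40, 6)))
train_spectra = coeff @ B
train_resp = train_spectra @ S.T        # simulated camera responses

# Learn a direct responses -> spectra map; S is never used in the fit.
W, *_ = np.linalg.lstsq(train_resp, train_spectra, rcond=None)
recovered = train_resp[0] @ W           # recover the first spectrum
```

    Because the spectra here lie exactly in a 6-dimensional subspace and the camera has 6 channels, the learned map recovers a training spectrum essentially exactly; with real daylight data the recovery is approximate but, as the abstract reports, can still be colorimetrically accurate with few bands.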

  17. Observation of Planetary Motion Using a Digital Camera

    ERIC Educational Resources Information Center

    Meyn, Jan-Peter

    2008-01-01

    A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8[superscript m] apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…

  18. Measurement of solar extinction in tower plants with digital cameras

    NASA Astrophysics Data System (ADS)

    Ballestrín, J.; Monterreal, R.; Carra, M. E.; Fernandez-Reche, J.; Barbero, J.; Marzo, A.

    2016-05-01

    Atmospheric extinction of solar radiation between the heliostat field and the receiver is accepted as a non-negligible source of energy loss in the increasingly large central receiver plants. However, there is currently no reliable measurement method for this quantity, and at present these plants are designed, built and operated without knowledge of this local parameter. Nowadays digital cameras are used in many scientific applications for their ability to convert available light into digital images. Their broad spectral range, high resolution and high signal-to-noise ratio make them interesting devices for solar technology. In this work a method for atmospheric extinction measurement based on digital images is presented. The possibility of defining a measurement setup in circumstances similar to those of a tower plant increases the credibility of the method. This procedure is currently being implemented at the Plataforma Solar de Almería.
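    A common way to estimate the extinction coefficient from images is the Beer-Lambert law: imaging the same target at two known ranges, the coefficient follows from the ratio of apparent brightnesses. This is a generic two-range sketch with invented numbers, not necessarily the setup described in the abstract:

```python
import math

def extinction_coefficient(i_near, d_near, i_far, d_far):
    """Beer-Lambert estimate of the atmospheric extinction coefficient
    (1/m) from the apparent brightness of the same target at two ranges."""
    return math.log(i_near / i_far) / (d_far - d_near)

# Synthetic check: brightnesses generated with a known coefficient.
beta_true = 1.2e-4          # 1/m, an assumed test value
i0 = 1000.0
i1 = i0 * math.exp(-beta_true * 500.0)    # target seen at 500 m
i2 = i0 * math.exp(-beta_true * 1500.0)   # same target at 1500 m
beta = extinction_coefficient(i1, 500.0, i2, 1500.0)
```

    Using a brightness ratio cancels the unknown source radiance i0, which is what makes a camera-based measurement practical between a heliostat field and a receiver tower.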

  19. The systematic error in digital image correlation induced by self-heating of a digital camera

    NASA Astrophysics Data System (ADS)

    Ma, Shaopeng; Pang, Jiazhi; Ma, Qinwei

    2012-02-01

    The systematic strain measurement error induced in digital image correlation (DIC) by self-heating of digital CCD and CMOS cameras was studied extensively; an experimental and data-analysis procedure is proposed, and two parameters are suggested to examine and evaluate the effect. Six digital cameras of four different types were tested to quantify the strain errors. Each camera needed between 1 and 2 h to reach a stable heat balance, with a measured temperature increase of around 10 °C. During the temperature increase, the virtual image expansion causes a 70-230 µε strain error in the DIC measurement, which is large enough to be noticed in most DIC experiments and hence should be eliminated.
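    The mechanism can be illustrated with a one-line model: a uniform virtual image expansion by a scale factor s produces a displacement field u(x) = (s - 1)x, whose gradient DIC reads as a uniform apparent strain of s - 1, even though the specimen never deformed. The scale value below is an assumed example, not a measured one:

```python
import numpy as np

# Virtual image expansion: a uniform scale change s maps grid point x
# to s*x, so DIC sees the displacement field u(x) = (s - 1) * x.
s = 1.0001                           # 0.01% thermal image expansion (assumed)
x = np.linspace(-50.0, 50.0, 101)    # subset centres in pixels
u = (s - 1.0) * x
strain = np.gradient(u, x)           # numerical du/dx, the apparent strain
apparent_microstrain = strain.mean() * 1e6
```

    A 0.01% image-scale drift already yields a 100 µε apparent strain, squarely inside the 70-230 µε range the abstract reports, which is why the effect matters in practice.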

  20. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples will be given for consumer, prosumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
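    Two of the derived quantities mentioned above follow from simple sensor physics: per-pixel full-well capacity scales with pixel area times the intrinsic full-well density, and dynamic range is the ratio of full well to the noise floor. The numbers below are illustrative assumptions, not values from the paper:

```python
import math

def sensor_metrics(pixel_pitch_um, full_well_per_cm2, read_noise_e):
    """Per-pixel full-well capacity (electrons) and dynamic range (dB)
    from the intrinsic full-well density in electrons per cm^2."""
    area_cm2 = (pixel_pitch_um * 1e-4) ** 2       # 1 um = 1e-4 cm
    full_well = full_well_per_cm2 * area_cm2
    dr_db = 20.0 * math.log10(full_well / read_noise_e)
    return full_well, dr_db

# Illustrative values: 6 um pixels, 1.7e11 e-/cm^2, 10 e- read noise.
fw, dr = sensor_metrics(6.0, 1.7e11, 10.0)
```

    This is why shrinking pixels while holding megapixel count up erodes dynamic range: halving the pitch quarters the full well, cutting roughly 12 dB off the ratio for the same read noise.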

  1. Preparation of a Low-Cost Digital Camera System for Remote Sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Off-the-shelf consumer digital cameras are convenient and user-friendly. However, the use of these cameras in remote sensing is limited because convenient methods for concurrently determining visible and near-infrared (NIR) radiation have not been developed. Two Nikon COOLPIX 4300 digital cameras ...

  2. Simulating the functionality of a digital camera pipeline

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2013-10-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal, color processing, and rendering. A spectral image processing algorithm is used to simulate the radiometric properties of a digital camera. In the algorithm, we take into consideration the spectral image and the transmittances of the light source, lenses, and filters, and the quantum efficiency of a complementary metal-oxide semiconductor (CMOS) image sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components, such as the Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of the Bayer sampling, interpolation, dynamic range, and analog-to-digital conversion. The reconstruction of the noisy blurred image is performed by blending differently exposed images in order to reduce the noise. Then, the image is filtered, deconvoluted, and sharpened to eliminate the noise and blur. Next come the color processing and rendering blocks: interpolation, white balancing, color correction, conversion from XYZ color space to LAB color space and then into RGB color space, and color saturation and contrast adjustment.

  3. A large distributed digital camera system for accelerator beam diagnostics

    NASA Astrophysics Data System (ADS)

    Catani, L.; Cianchi, A.; Di Pirro, G.; Honkavaara, K.

    2005-07-01

    Optical diagnostics, providing images of accelerated particle beams using radiation emitted by particles impinging on a radiator, typically a fluorescent screen, has been extensively used, especially on electron linacs, since the 1970s. Higher intensity beams available in the last decade allow extending the use of beam imaging techniques to perform precise measurements of important beam parameters such as emittance, energy, and energy spread using optical transition radiation (OTR). OTR-based diagnostics systems are extensively used on the superconducting TESLA Test Facility (TTF) linac driving the vacuum ultraviolet free electron laser (VUV-FEL) at the Deutsches Elektronen-Synchrotron facility. Up to 30 optical diagnostic stations have been installed at various positions along the 250-m-long linac, each equipped with a high-performance digital camera. This paper describes the new approach to the design of the hardware and software setups required by the complex topology of such a distributed camera system.

  4. Formal methods and digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

  5. Imaging and radiometric performance simulation for a new high-performance dual-band airborne reconnaissance camera

    NASA Astrophysics Data System (ADS)

    Seong, Sehyun; Yu, Jinhee; Ryu, Dongok; Hong, Jinsuk; Yoon, Jee-Yeon; Kim, Sug-Whan; Lee, Jun-Ho; Shin, Myung-Jin

    2009-05-01

    In recent years, high-performance visible and IR cameras have been used widely for tactical airborne reconnaissance. Improving the process of efficient discrimination and analysis of complex target information from active battlefields requires simultaneous multi-band measurements from airborne platforms at various altitudes. We report a new dual-band airborne camera designed for simultaneous registration of both visible and IR imagery from mid-altitude ranges. The camera design uses a common front-end optical telescope of around 0.3 m in entrance aperture and several relay optical sub-systems capable of delivering both high spatial resolution visible and IR images to the detectors. The camera design benefits from the use of several optical channels packaged in a compact space and the associated freedom to choose between wide (~3 degrees) and narrow (~1 degree) fields of view. In order to investigate both the imaging and radiometric performance of the camera, we generated an array of target scenes with optical properties such as reflection, refraction, scattering, transmission and emission. We then combined the target scenes and the camera optical system into an integrated ray tracing simulation environment utilizing a Monte Carlo computation technique. Taking realistic atmospheric radiative transfer characteristics into account, both imaging and radiometric performance were then investigated. The simulation results demonstrate successfully that the camera design satisfies the NIIRS 7 detection criterion. The camera concept, details of the performance simulation computation, and the resulting performance are discussed together with a future development plan.

  6. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  7. Practical use of digital cameras for microphotography through the operating microscope.

    PubMed

    Gurunluoglu, Raffi; Shafighi, Maziar; Ozer, Kagan; Piza-Katzer, Hildegunde

    2003-07-01

    A practical use of personal digital cameras for taking digital photographs in the microsurgical field through an operating microscope is described. This inexpensive and practical method for acquiring microscopic images at the desired magnification combines the advantages of the digital camera and the operating microscope.

  8. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    NASA Astrophysics Data System (ADS)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the digital camera Canon EOS 350D, often used for photogrammetric 3D digitization and measurement of industrial and construction-site objects. The calibration yielded data on the optical and electronic parameters influencing image distortion, such as the correction of the principal point, the focal length of the objective, and radial symmetric and asymmetric distortions. The calibration was performed by means of the Tcc software, which implements Chebyshev polynomials, using a special test field with marks whose coordinates are precisely known. The main task of the research is to determine how the camera calibration parameters influence the processing of images, i.e., the creation of the geometric model, the results of triangulation calculations, and stereo-digitization. Two photogrammetric projects were created for this task: the first used non-corrected images, while the second used images corrected for the optical errors of the camera obtained during calibration. The results of the image-processing analysis are shown in figures and tables, and conclusions are given.
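    The radial-distortion part of such a calibration is commonly expressed with the Brown polynomial model. A minimal sketch follows; the coefficients are illustrative, not the Canon EOS 350D values obtained in the paper:

```python
def distort_point(x, y, k1, k2, cx=0.0, cy=0.0):
    """Apply the radial terms of the Brown distortion model to an ideal
    image point (x, y), measured relative to the principal point (cx, cy).
    k1, k2 are hypothetical radial distortion coefficients."""
    xn, yn = x - cx, y - cy
    r2 = xn * xn + yn * yn                 # squared radial distance
    scale = 1 + k1 * r2 + k2 * r2 * r2     # radial scaling polynomial
    return cx + xn * scale, cy + yn * scale

# With zero coefficients the mapping is the identity
xd, yd = distort_point(1.0, 2.0, k1=0.0, k2=0.0)
# With k1 = 0.01 a point at radius 1 is pushed outward by 1%
xd2, yd2 = distort_point(1.0, 0.0, k1=0.01, k2=0.0)
```

    Correcting an image means inverting this mapping for every pixel, which is why uncorrected images degrade the triangulation results reported in the paper.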

  9. Digital cameras with designs inspired by the arthropod eye.

    PubMed

    Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Xiao, Jianliang; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B; Huang, Yonggang; Rogers, John A

    2013-05-02

    In arthropods, evolution has created a remarkably sophisticated class of imaging systems, with a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. A challenge in building digital cameras with the hemispherical, compound apposition layouts of arthropod eyes is that essential design requirements cannot be met with existing planar sensor technologies or conventional optics. Here we present materials, mechanics and integration schemes that afford scalable pathways to working, arthropod-inspired cameras with nearly full hemispherical shapes (about 160 degrees). Their surfaces are densely populated by imaging elements (artificial ommatidia), which are comparable in number (180) to those of the eyes of fire ants (Solenopsis fugax) and bark beetles (Hylastes nigrinus). The devices combine elastomeric compound optical elements with deformable arrays of thin silicon photodetectors into integrated sheets that can be elastically transformed from the planar geometries in which they are fabricated to hemispherical shapes for integration into apposition cameras. Our imaging results and quantitative ray-tracing-based simulations illustrate key features of operation. These general strategies seem to be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  10. Temperature Mapping in Hydrogel Matrices Using Unmodified Digital Camera.

    PubMed

    Darwish, Ghinwa H; Fakih, Hassan H; Karam, Pierre

    2017-02-09

    We report a simple, generally applicable, and noninvasive fluorescent method for mapping thermal fluctuations in hydrogel matrices using an unmodified commercially available digital single-lens reflex camera (DSLR). The nanothermometer is based on the complexation of short conjugated polyelectrolytes, poly(phenylene ethynylene) carboxylate, with an amphiphilic polymer, polyvinylpyrrolidone, which is in turn trapped within the porous network of a gel matrix. Changes in the temperature lead to a fluorescent ratiometric response with a maximum relative sensitivity of 2.0% and 1.9% at 45.0 °C for 0.5% agarose and agar, respectively. The response was reversible with no observed hysteresis when samples were cycled between 20 and 40 °C. As a proof of concept, the change in fluorescent signal/color was captured using a digital camera. The images were then dissected into their red-green-blue (RGB) components using a Matlab routine. A linear correlation was observed between the hydrogel temperature and the green and blue intensity channels. The reported sensor has the potential to provide a wealth of information when thermal fluctuations mapped in soft gel matrices are correlated with chemical or physical processes.
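    The ratiometric readout described above can be sketched as follows. The green/blue ratio and the linear calibration constants here are hypothetical stand-ins for the paper's Matlab routine and fitted calibration:

```python
import numpy as np

def ratiometric_temperature(rgb, slope, intercept):
    """Map the mean green/blue channel ratio of an image region to a
    temperature via a linear calibration. slope and intercept are
    hypothetical fit constants, not the paper's values."""
    g = rgb[..., 1].mean()   # mean green intensity
    b = rgb[..., 2].mean()   # mean blue intensity
    return slope * (g / b) + intercept

# Synthetic frame whose green/blue ratio is exactly 1.2
frame = np.zeros((8, 8, 3))
frame[..., 1] = 0.6
frame[..., 2] = 0.5
t = ratiometric_temperature(frame, slope=50.0, intercept=-25.0)
```

    Applying the same mapping per pixel rather than per frame yields the temperature map.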

  11. Payette National Forest aerial survey project using the Kodak digital color infrared camera

    NASA Astrophysics Data System (ADS)

    Greer, Jerry D.

    1997-11-01

    Staff of the Payette National Forest located in central Idaho used the Kodak Digital Infrared Camera to collect digital photographic images over a wide variety of selected areas. The objective of this aerial survey project is to collect airborne digital camera imagery and to evaluate it for potential use in forest assessment and management. The data collected from this remote sensing system is being compared with existing resource information and with personal knowledge of the areas surveyed. Resource specialists are evaluating the imagery to determine if it may be useful for: identifying cultural sites (pre-European settlement tribal villages and camps); recognizing ecosystem landscape patterns; mapping recreation areas; evaluating the South Fork Salmon River road reconstruction project; designing the Elk Summit Road; assessing the impact of sediment on anadromous fish in the South Fork Salmon River; assessing any contribution of sediment to the South Fork from the reconstructed road; determining post-wildfire stress development in conifer timber; assessing the development of insect populations in areas initially determined to be within low-intensity wildfire burn polygons; and searching for Idaho Ground Squirrel habitat. Project sites include approximately 60 linear miles of the South Fork of the Salmon River; a parallel road over about half that distance; 3 archaeological sites; two transects of about 6 miles each for landscape patterns; 3 recreation areas; 5 miles of the Payette River; 4 miles of the Elk Summit Road; a pair of transects 4.5 miles long for stress assessment in timber; a triplet of transects about 3 miles long for assessing species identification; and an area of about 640 acres to evaluate habitat for the endangered Idaho Ground Squirrel. Preliminary results indicate that the imagery is an economically viable way to collect site-specific resource information that is of value in the management of a national forest.

  12. Spatial statistical analysis of tree deaths using airborne digital imagery

    NASA Astrophysics Data System (ADS)

    Chang, Ya-Mei; Baddeley, Adrian; Wallace, Jeremy; Canci, Michael

    2013-04-01

    High resolution digital airborne imagery offers unprecedented opportunities for observation and monitoring of vegetation, providing the potential to identify, locate and track individual vegetation objects over time. Analytical tools are required to quantify relevant information. In this paper, locations of trees over a large area of native woodland vegetation were identified using morphological image analysis techniques. Methods of spatial point process statistics were then applied to estimate the spatially-varying tree death risk, and to show that it is significantly non-uniform. [Tree deaths over the area were detected in our previous work (Wallace et al., 2008).] The study area is a major source of ground water for the city of Perth, and the work was motivated by the need to understand and quantify vegetation changes in the context of water extraction and drying climate. The influence of hydrological variables on tree death risk was investigated using spatial statistics (graphical exploratory methods, spatial point pattern modelling and diagnostics).

  13. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
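    The core idea of the patent, using the previously recorded frame as the encryption key for the next frame, can be illustrated with a toy keystream. The XOR cipher below is purely a demonstration stand-in, not the patent's encryption circuit:

```python
def xor_encrypt(frame, key_frame):
    """Illustrative stand-in for the patent's encryption circuit: combine a
    frame of video bytes with the previously recorded frame acting as the
    key. XOR is used here purely for demonstration."""
    return bytes(b ^ k for b, k in zip(frame, key_frame))

previous = bytes(range(16))      # plays the role of the stored key frame
current = b"sixteen byte img"    # the new frame from the image sensor
cipher = xor_encrypt(current, previous)
# XOR with the same key frame recovers the original frame
plain = xor_encrypt(cipher, previous)
```

    Because the key frame is overwritten by the ciphertext in the patent's scheme, the key is automatically deleted once the new frame is written.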

  14. Camera system resolution and its influence on digital image correlation

    SciTech Connect

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Little research has addressed how the imaging system resolution influences the results of this increasingly popular method. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than, as typically assumed, the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.
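    The subset-matching step at the heart of DIC can be sketched with a zero-normalized cross-correlation search over integer displacements. The synthetic speckle pattern and the search range are illustrative assumptions:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))               # synthetic speckle pattern
deformed = np.roll(ref, shift=3, axis=1) # "deformed" image: 3 px shift in x

subset = ref[8:24, 8:24]                 # 16x16 subset in the reference image
# Search integer x-displacements and keep the best-correlated match
best = max(range(-5, 6), key=lambda u: zncc(subset, deformed[8:24, 8 + u:24 + u]))
```

    A full DIC code refines this integer match to subpixel accuracy; the paper's point is that blur from a resolution-limited lens flattens the correlation peak, so larger subsets or larger speckles are needed to keep the match well conditioned.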

  15. Camera system resolution and its influence on digital image correlation

    DOE PAGES

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Little research has addressed how the imaging system resolution influences the results of this increasingly popular method. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than, as typically assumed, the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.

  16. DR with a DSLR: Digital Radiography with a Digital Single-Lens Reflex camera.

    PubMed

    Fan, Helen; Durko, Heather L; Moore, Stephen K; Moore, Jared; Miller, Brian W; Furenlid, Lars R; Pradhan, Sunil; Barrett, Harrison H

    2010-02-15

    An inexpensive, portable digital radiography (DR) detector system for use in remote regions has been built and evaluated. The system utilizes a large-format digital single-lens reflex (DSLR) camera to capture the image from a standard fluorescent screen. The large sensor area allows relatively small demagnification factors and hence minimizes the light loss. The system has been used for initial phantom tests in urban hospitals and Himalayan clinics in Nepal, and it has been evaluated in the laboratory at the University of Arizona by additional phantom studies. Typical phantom images are presented in this paper, and a simplified discussion of the detective quantum efficiency of the detector is given.

  17. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  18. 77 FR 43858 - Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-26

    ... COMMISSION Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and... States after importation of certain mobile telephones and wireless communication devices featuring digital cameras, and components thereof, that infringe certain claims of U.S. Patent No. 6,292,218...

  19. Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.

    ERIC Educational Resources Information Center

    Mills, David A.; Kelley, Kevin; Jones, Michael

    2001-01-01

    Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)

  20. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functions of consumer-grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is in great demand in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star-tracker problem, and many target-location algorithms have been developed. It is widely accepted that least-squares ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirming the subpixel target-location algorithms for consumer-grade digital cameras, establishing the relationship between the number of edge points along the target boundary and accuracy, and finding an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motivation, this paper empirically tests several subpixel target-location algorithms and investigates an accuracy indicator, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
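    One member of the algorithm family the paper tests, the grey-value weighted centroid, can be sketched as follows; the synthetic Gaussian target is an illustrative assumption:

```python
import numpy as np

def weighted_centroid(img):
    """Grey-value weighted centroid: a simple subpixel target locator,
    one of the algorithm family compared against ellipse fitting."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return float((xs * img).sum() / total), float((ys * img).sum() / total)

# Synthetic circular target centred between pixels at (4.5, 4.5)
ys, xs = np.mgrid[0:10, 0:10]
target = np.exp(-((xs - 4.5) ** 2 + (ys - 4.5) ** 2) / 4.0)
cx, cy = weighted_centroid(target)
```

    The recovered centre falls between pixel positions, which is exactly the subpixel behaviour the paper's accuracy tests quantify.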

  1. Airborne Camera System for Real-Time Applications - Support of a National Civil Protection Exercise

    NASA Astrophysics Data System (ADS)

    Gstaiger, V.; Romer, H.; Rosenbaum, D.; Henkel, F.

    2015-04-01

    In the VABENE++ project of the German Aerospace Center (DLR), powerful tools are being developed to aid public authorities and organizations with security responsibilities, as well as traffic authorities, when dealing with disasters and large public events. One focus lies on the acquisition of high resolution aerial imagery, its fully automatic processing and analysis, and its near real-time provision to decision makers in emergency situations. For this purpose a camera system was developed to be operated from a helicopter, with lightweight processing units and a microwave link for fast data transfer. In order to meet end-users' requirements, DLR works closely with the German Federal Office of Civil Protection and Disaster Assistance (BBK) within this project. One task of BBK is to establish, maintain and train the German Medical Task Force (MTF), which is deployed nationwide in case of large-scale disasters. In October 2014, several units of the MTF were deployed for the first time in the framework of a national civil protection exercise in Brandenburg. The VABENE++ team joined the exercise and provided near real-time aerial imagery, videos and derived traffic information to support the direction of the MTF and to identify needs for further improvements and developments. In this contribution the authors introduce the new airborne camera system together with its near real-time processing components and share experiences gained during the national civil protection exercise.

  2. Quantification of gully volume using very high resolution DSM generated through 3D reconstruction from airborne and field digital imagery

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Zarco-Tejada, Pablo; Laredo, Mario; Gómez, Jose Alfonso

    2013-04-01

    Major advances have been made recently in automatic 3D photo-reconstruction techniques using uncalibrated and non-metric cameras (James and Robson, 2012). However, their application to soil conservation studies and landscape feature identification is still in its early stages. The aim of this work is to compare the performance of a remote sensing technique using a digital camera mounted on an airborne platform with 3D photo-reconstruction, a method already validated for gully erosion assessment purposes (Castillo et al., 2012). A field survey was conducted in November 2012 in a 250 m-long gully located in field crops on a Vertisol in Cordoba (Spain). The airborne campaign was conducted with a 4000x3000 digital camera installed onboard an aircraft flying at 300 m above ground level to acquire 6 cm resolution imagery. A total of 990 images were acquired over the area, ensuring a large overlap in the across- and along-track directions of the aircraft. An ortho-mosaic and the digital surface model (DSM) were obtained through automatic aerial triangulation and camera calibration methods. For the field-level photo-reconstruction technique, the gully was divided into several reaches to allow appropriate reconstruction (about 150 pictures taken per reach) and, finally, the resulting point clouds were merged into a unique mesh. A centimetric-accuracy GPS provided a benchmark dataset for the gully perimeter and distinguishable reference points in order to allow the assessment of the measurement errors of the airborne technique and the georeferencing of the photo-reconstruction 3D model. The uncertainty in the definition of the gully limits was explicitly addressed by comparing several criteria obtained from the 3D models (slope and second derivative) with the outer perimeter obtained by the GPS operator identifying visually the change in slope at the top of the gully walls. In this study we discussed the magnitude of planimetric and altimetric errors and the differences observed between the
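    Once a DSM is available, gully volume is typically computed by differencing the terrain surface against a reference surface interpolated across the gully rim. A minimal sketch, with illustrative grids and cell size rather than the study's data:

```python
import numpy as np

def gully_volume(dsm_terrain, dsm_reference, cell_size):
    """Volume between a reference surface (e.g. interpolated across the
    gully rim) and the measured DSM, summed over cells where the terrain
    lies below the reference. Inputs are illustrative, not survey data."""
    depth = np.clip(dsm_reference - dsm_terrain, 0, None)  # depth (m) per cell
    return float(depth.sum() * cell_size ** 2)             # m^3

reference = np.full((10, 10), 100.0)   # flat rim surface at 100 m elevation
terrain = np.full((10, 10), 100.0)
terrain[3:7, 3:7] = 98.0               # a 2 m deep incision over 16 cells
vol = gully_volume(terrain, reference, cell_size=0.06)     # 6 cm cells
```

    The clip to non-negative depths keeps spoil mounds above the rim from cancelling incised volume.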

  3. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
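    The hash-then-sign scheme described in the patent can be sketched with standard-library primitives. Here HMAC stands in for the patent's public-key encryption of the hash; a real implementation would use an asymmetric scheme so that verification needs only the public key:

```python
import hashlib
import hmac

PRIVATE_KEY = b"camera-embedded-secret"  # stand-in for the camera's private key

def sign_image(image_bytes):
    """Hash the image file, then bind the hash to the camera's key.
    HMAC is an illustrative substitute for the patent's private-key
    encryption of the image hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(PRIVATE_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    """Recompute the signature and compare; any alteration breaks the match."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

sig = sign_image(b"raw image data")
ok = verify_image(b"raw image data", sig)
tampered = verify_image(b"raw image dat4", sig)  # one altered byte
```

    As the patent notes, even a one-bit change in the image makes the recomputed hash, and hence the signature check, fail completely.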

  4. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.

  5. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    NASA Astrophysics Data System (ADS)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve existing features to their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state-of-the-art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving the parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter- Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). 
Performance results using three different prototype cameras
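The search procedure the abstract describes — adjusting the focus actuator to maximize a band-pass sharpness measure — can be illustrated with a generic coarse-to-fine sketch. This is not the dissertation's Filter-Switching algorithm; the `capture` lens model, the sharpness measure, and all parameters are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))

def capture(pos, true_focus=42):
    """Hypothetical lens model: defocus smoothly attenuates image contrast."""
    contrast = 1.0 / (1.0 + 0.05 * (pos - true_focus) ** 2)
    return scene * contrast

def sharpness(frame):
    """Simple high-pass sharpness measure (stand-in for a digital band-pass filter)."""
    hp = np.diff(frame, axis=1)  # horizontal first difference
    return float(np.sum(hp * hp))

def coarse_to_fine_af(lo, hi, coarse_step, fine_step):
    """Search for the focus position maximizing sharpness: coarse pass, then fine pass."""
    best = max(range(lo, hi + 1, coarse_step), key=lambda p: sharpness(capture(p)))
    fine = range(max(lo, best - coarse_step), min(hi, best + coarse_step) + 1, fine_step)
    return max(fine, key=lambda p: sharpness(capture(p)))

print(coarse_to_fine_af(0, 100, 10, 1))  # finds the in-focus position, 42
```

The step sizes (10 and 1 here) are exactly the kind of parameters the dissertation argues should be derived systematically rather than tuned by hand.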

  6. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes, it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80), and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measures are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs of similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with an RMSE of 0.8 m and an NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces an RMSE of 1.03 m and an NMAD of 0.83 m (0.21‰ of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain: for slopes inclined less than 40°, the RMSE is better than 1 m. In more inclined areas the error and outlier occurrence

  7. Aerosol retrieval from twilight photographs taken by a digital camera

    NASA Astrophysics Data System (ADS)

    Saito, M.; Iwabuchi, H.

    2014-12-01

The twilight sky, one of the most beautiful sights seen in daily life, varies day by day because atmospheric components such as ozone and aerosols also vary day by day. Recent studies have revealed the effects of tropospheric aerosols on the twilight sky. In this study, we develop a new algorithm for aerosol retrievals from twilight photographs taken by a digital single-lens reflex camera at solar zenith angles of 90-96˚ in intervals of 1˚. A radiative transfer model taking the spherical-shell atmosphere, multiple scattering, and refraction into account is used as the forward model, and optimal estimation is used as the inversion method to infer the aerosol optical and radiative properties. Sensitivity tests show that the tropospheric (stratospheric) aerosol optical thickness governs the distribution of twilight sky color and brightness near the horizon (at viewing angles of 10˚ to 20˚), and that the aerosol size distribution governs the angular distribution of brightness near the solar direction. The AOTs are inferred with small uncertainties and agree very well with those from the Skyradiometer. In this conference, several case studies using the algorithm will be shown.

  8. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as being in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time-use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  9. Side oblique real-time orthophotography with the 9Kx9K digital framing camera

    NASA Astrophysics Data System (ADS)

    Gorin, Brian A.

    2003-08-01

    BAE SYSTEMS has reported on a new framing camera incorporating an ultra high resolution CCD detector array comprised of 9,216 x 9,216 pixels fabricated on one silicon wafer. The detector array features a 1:2 frame-per-second readout capable of stereo imagery with Nyquist resolution of 57 lp/mm from high velocity, low altitude (V/H) airborne platforms. Flight tests demonstrated the capability of the focal plane electronics for differential image motion compensation (IMC) with Nyquist performance utilizing a focal plane shutter (FPS) to enable both nadir and significant side and forward oblique imaging angles. The impact of FPS for differential image motion compensation is evaluated with the exterior orientation calibration parameters, which include the existing shutter velocity and flight dynamics from sample mapping applications. System requirements for GPS/INS are included with the effect of vertical error and side oblique angle impact of the digital elevation map (DEM) required to create the orthophoto. Results from the differentiated "collinearity equations" which relate the image coordinates to elements of interior and exterior orientation are combined with the DEM impact to provide useful guidelines for side oblique applications. The application of real-time orthophotography is described with the implications for system requirements for side oblique orthophoto capability.

  10. Real-time object tracking for moving target auto-focus in digital camera

    NASA Astrophysics Data System (ADS)

    Guan, Haike; Niinami, Norikatsu; Liu, Tong

    2015-02-01

Focusing accurately on a moving object is difficult but essential for successfully photographing the target with a digital camera. Because the object often moves randomly and changes its shape frequently, the position and distance of the target must be estimated in real time so the camera can focus on the object precisely. We propose a new real-time object-tracking method for moving-target auto-focus in a digital camera. The video stream in the camera is used for tracking the moving target. A particle filter deals with the target's random movement and shape changes, with color and edge features used as measurements of the object's state. A parallel processing algorithm is developed so that real-time particle-filter tracking can be realized in the hardware environment of the digital camera. A movement-prediction algorithm is also proposed to remove the focus error caused by the difference between the tracking result and the target's real position at the moment the photo is taken. Simulation and experimental results in a digital camera demonstrate the effectiveness of the proposed method. We embedded the real-time object-tracking algorithm in the digital camera: the position and distance of the moving target are obtained accurately by tracking it in the video stream, and a SIMD processor is applied for parallel real-time processing. A processing time of less than 60 ms per frame is achieved in the digital camera with a CPU of only 162 MHz.

  11. DR with a DSLR: Digital Radiography with a Digital Single-Lens Reflex camera

    PubMed Central

    Fan, Helen; Durko, Heather L.; Moore, Stephen K.; Moore, Jared; Miller, Brian W.; Furenlid, Lars R.; Pradhan, Sunil; Barrett, Harrison H.

    2010-01-01

    An inexpensive, portable digital radiography (DR) detector system for use in remote regions has been built and evaluated. The system utilizes a large-format digital single-lens reflex (DSLR) camera to capture the image from a standard fluorescent screen. The large sensor area allows relatively small demagnification factors and hence minimizes the light loss. The system has been used for initial phantom tests in urban hospitals and Himalayan clinics in Nepal, and it has been evaluated in the laboratory at the University of Arizona by additional phantom studies. Typical phantom images are presented in this paper, and a simplified discussion of the detective quantum efficiency of the detector is given. PMID:21516238

  12. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that both have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  13. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be the services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-appliance-based communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services, as well as the need to download software upgrades to the camera, is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper evaluates the system implications of deploying recurring-revenue services and enterprise connectivity for a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, and security. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to enterprise systems.

  14. Work starts on building world's largest digital camera

    NASA Astrophysics Data System (ADS)

    Kruesi, Liz

    2015-10-01

    The $473m Large Synoptic Survey Telescope (LSST) has moved one step closer to completion after the US Department of Energy (DOE) approved the start of construction for the telescope's $168m 3.2-gigapixel camera.

  15. Fast measurement of temporal noise of digital camera's photosensors

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

Currently photo- and videocameras are widespread parts of both scientific experimental setups and consumer applications. They are used in optics, radiophysics, astrophotography, chemistry, and various other fields of science and technology such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. The spatial part is usually several times lower in magnitude than the temporal part, so to a first approximation spatial noise may be neglected. Earlier we proposed a modification of the automatic segmentation of non-uniform targets (ASNT) method for measurement of the temporal noise of photo- and videocameras. Only two frames are sufficient for noise measurement with the modified method, so the proposed ASNT modification should allow fast and accurate measurement of temporal noise. In this paper, we estimated the light and dark temporal noise of four cameras of different types using the modified ASNT method with only several frames. These cameras are: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PLB781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. We also measured the time elapsed in processing the shots used for temporal noise estimation. The results demonstrate the possibility of quickly obtaining the dependence of a camera's full temporal noise on signal value with the proposed ASNT modification.
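The two-frame idea can be illustrated with a generic sketch (a simplified stand-in, not the authors' ASNT method): subtracting two frames of a static scene cancels every fixed component, and the standard deviation of the difference, divided by the square root of two, estimates the temporal noise. All values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two frames of the same static scene: fixed signal plus independent
# temporal noise of known standard deviation (all values synthetic).
signal = np.full((512, 512), 1000.0)
sigma_true = 8.0
frame_a = signal + rng.normal(0.0, sigma_true, signal.shape)
frame_b = signal + rng.normal(0.0, sigma_true, signal.shape)

# The difference cancels everything fixed (signal, spatial non-uniformity);
# its standard deviation is sqrt(2) times the temporal noise.
sigma_est = (frame_a - frame_b).std() / np.sqrt(2.0)

print(abs(sigma_est - sigma_true) < 0.1)  # True: ~8.0 recovered from two frames
```

Because the subtraction removes the pattern component, this estimate isolates exactly the temporal part that the abstract distinguishes from spatial noise.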

  16. Measuring magnification of virtual images using digital cameras

    NASA Astrophysics Data System (ADS)

    Kutzner, Mickey D.; Snelling, Samantha

    2016-11-01

    The concept of virtual images and why they lead to angular rather than linear magnification of optical images can be vague for students. A particularly straightforward method of obtaining quantitative magnification for simple magnifiers, microscopes, and telescopes uses the technology that students carry in their backpacks—their camera phones.

  17. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  18. DigiCam: fully digital compact camera for SST-1M telescope

    NASA Astrophysics Data System (ADS)

    Aguilar, J. A.; Bilnik, W.; Bogacz, L.; Bulik, T.; Christov, A.; della Volpe, D.; Dyrda, M.; Frankowski, A.; Grudzinska, M.; Grygorczuk, J.; Heller, M.; Idźkowski, B.; Janiak, M.; Jamrozy, M.; Karczewski, M.; Kasperek, J.; Lyard, E.; Marszałek, A.; Michałowski, J.; Moderski, R.; Montaruli, T.; Neronov, A.; Nicolau-Kukliński, J.; Niemiec, J.; Ostrowski, M.; Paśko, P.; Płatos, Ł.; Prandini, E.; Pruchniewicz, R.; Rafalski, J.; Rajda, P. J.; Rameez, M.; Rataj, M.; Rupiński, M.; Rutkowski, K.; Seweryn, K.; Sidz, M.; Stawarz, Ł.; Stodulska, M.; Stodulski, M.; Tokarz, M.; Toscano, S.; Troyano Pujadas, I.; Walter, R.; Wawer, P.; Wawrzaszek, R.; Wiśniewski, L.; Zietara, K.; Ziółkowski, P.; Żychowski, P.

    2014-08-01

The single-mirror Small Size Telescopes (SST-1M), being built by a sub-consortium of Polish and Swiss institutions of the CTA Consortium, will be equipped with a fully digital camera with a compact photodetector plane based on silicon photomultipliers. The internal trigger-signal transmission overhead will be kept low by introducing a high level of integration, achieved by massively deploying state-of-the-art multi-gigabit transceivers, from the flash ADCs, through the internal data and trigger signal transmission over backplanes and cables, to the camera's 10 Gb/s Ethernet links to the server. This approach allows fitting the size and weight of the camera exactly to the SST-1M needs while retaining the flexibility of a fully digital design, with low power consumption, high reliability and long lifetime. The concept of the camera is described, along with some construction details and performance results.

  19. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters.

    PubMed

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-07-20

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels.
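The registration pipeline the paper names — matching points fed into a homography model, with RANSAC rejecting mismatches — can be sketched in plain NumPy. The feature-extraction step (SIFT) is omitted; point matches here are synthetic, and thresholds and iteration counts are illustrative, not the paper's values.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H with dst ~ H @ src (homogeneous)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, n_iter=500, tol=2.5, seed=0):
    """RANSAC over minimal 4-point samples; keeps the model with most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = fit_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model.
    return fit_homography(src[best_inliers], dst[best_inliers])

# Synthetic check: a known homography, 40 good matches, 10 gross mismatches.
rng = np.random.default_rng(3)
H_true = np.array([[1.01, 0.02, 5.0], [-0.01, 0.99, -3.0], [1e-5, 2e-5, 1.0]])
src = rng.uniform(0, 1000, (50, 2))
dst = project(H_true, src)
dst[:10] += rng.uniform(50, 200, (10, 2))  # simulated SIFT mismatches
H_est = ransac_homography(src, dst)
print(np.max(np.linalg.norm(project(H_est, src[10:]) - dst[10:], axis=1)) < 0.5)
```

The 2.5-pixel inlier tolerance echoes the band-to-band alignment error the paper reports; in practice the matches would come from SIFT descriptors rather than a synthetic transform.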

  20. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264

  1. USGS QA Plan: Certification of digital airborne mapping products (1)

    USGS Publications Warehouse

    Christopherson, J.

    2007-01-01

    To facilitate acceptance of new digital technologies in aerial imaging and mapping, the US Geological Survey (USGS) and its partners have launched a Quality Assurance (QA) Plan for Digital Aerial Imagery. This should provide a foundation for the quality of digital aerial imagery and products. It introduces broader considerations regarding processes employed by aerial flyers in collecting, processing and delivering data, and provides training and information for US producers and users alike.

  2. Field Portable Digital Ophthalmoscope/Fundus Camera. Phase I

    DTIC Science & Technology

    1997-05-01

Retinal imaging is key for diagnoses and treatment of various eye-sight-robbing injuries and pathologies, including retinal detachments, laser damage, CMV retinitis, retinitis pigmentosa, glaucoma, tumors, and the like. ... personnel, and generally only used by ophthalmologists or in hospital settings. The retinal camera of this project will revolutionize retinal imaging

  3. Applications of the Lambert W function to analyze digital camera sensors

    NASA Astrophysics Data System (ADS)

    Villegas, Daniel

    2014-05-01

The Lambert W function is applied via Maple to analyze the operation of modern digital camera sensors. The Lambert W function has been applied previously to understand the functioning of diodes and solar cells, and the parallelism between the physics of solar cells and digital camera sensors will be exploited. Digital camera sensors use p-n photodiodes, and such photodiodes can be studied using the Lambert W function. In general, the bulk transformation of light into photocurrent is described by an equivalent circuit, which determines a dynamical equation to be solved using the Lambert W function. Specifically, in a camera sensor, the precise measurement of light intensity by filtering through color filters creates a measurable photocurrent that is proportional to image-point intensity, and that photocurrent is given in terms of the Lambert W function. It is claimed that the drift between neighboring photocells at long wavelengths affects the ability to resolve an image, and that such drift can be represented effectively using the Lambert W function. It is also conjectured that the recombination of charge carriers in digital sensors is connected to the notion of "noise" in photography, and that such "noise" could be described by certain combinations of Lambert W functions. Finally, it is suggested that the notion of bias, and varying the width of the depletion zone, has a relationship to the ISO "speed" of the camera sensor, and that such a relationship could be described using Lambert W functions.
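The core idea — that the implicit photodiode equation has an explicit solution in terms of Lambert W — can be checked numerically without Maple. The sketch below uses the standard single-diode model with series resistance (shunt resistance neglected); all component values are illustrative, not taken from the paper.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function (w * e^w = x) for x >= 0,
    computed by Newton's method."""
    w = math.log1p(x) if x < math.e else math.log(x) - math.log(math.log(x))
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Single-diode photodiode model with series resistance Rs (shunt neglected):
#     I = Iph - Is * (exp((V + I*Rs) / (n*Vt)) - 1)
# Its explicit solution uses the Lambert W function:
#     I = Iph + Is - (n*Vt/Rs) * W( (Is*Rs/(n*Vt)) * exp((V + Rs*(Iph+Is)) / (n*Vt)) )
Iph, Is = 1e-6, 1e-12            # photocurrent and saturation current (A), illustrative
n, Vt, Rs = 1.0, 0.02585, 100.0  # ideality factor, thermal voltage (V), series resistance (ohm)
V = 0.1                          # bias voltage (V), illustrative

a = n * Vt / Rs
I = Iph + Is - a * lambert_w((Is / a) * math.exp((V + Rs * (Iph + Is)) / (n * Vt)))

# Verify against the implicit diode equation:
residual = I - (Iph - Is * (math.exp((V + I * Rs) / (n * Vt)) - 1.0))
print(abs(residual) < 1e-12)  # True: the explicit Lambert-W solution satisfies it
```

In a real sensor analysis, `Iph` would be driven by the incident light on the pixel, which is what makes the photocurrent a Lambert-W expression of illumination.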

  4. 2010 A Digital Odyssey: Exploring Document Camera Technology and Computer Self-Efficacy in a Digital Era

    ERIC Educational Resources Information Center

    Hoge, Robert Joaquin

    2010-01-01

    Within the sphere of education, navigating throughout a digital world has become a matter of necessity for the developing professional, as with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT; to see if there is a relationship between teachers' comfort with DCT and to the…

  5. Accurate measurement of spatial noise portraits of photosensors of digital cameras

    NASA Astrophysics Data System (ADS)

    Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Kulakov, M. N.; Starikov, R. S.

    2016-08-01

A method for the accurate measurement of light and dark spatial noise portraits of photosensors is described. The method consists of four steps: creation of spatially homogeneous illumination; shooting of light and dark frames; digital processing; and filtering. Unlike the standard technique, this method uses iterative creation of spatially homogeneous illumination by a display, compensation of the photosensor's dark spatial noise portrait, and an improved procedure for elimination of dark temporal noise. Portraits of light and dark spatial noise of the photosensor of a scientific digital camera were found. Characteristics of the measured portraits were compared with values of the photo-response and dark-signal non-uniformities of the camera's photosensor.

  6. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
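The paper's representation — sky radiance as a wavelength polynomial whose coefficients come from RGB counts by a linear transformation — can be sketched under assumptions. Here the transform is calibrated by least squares on a synthetic training set; in the actual method the training pairs would come from co-located spectrometer measurements, and the transform itself is the authors' to derive.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibration set: RGB counts paired with known polynomial
# coefficients (in practice, fits to co-located spectrometer measurements).
M_true = rng.normal(size=(4, 3))           # unknown linear map: coeffs = M @ rgb
rgb_train = rng.uniform(0, 255, (100, 3))  # 100 training pixels
coef_train = rgb_train @ M_true.T          # cubic-polynomial coefficients

# Least-squares estimate of the RGB -> coefficient transform.
X, *_ = np.linalg.lstsq(rgb_train, coef_train, rcond=None)
M_est = X.T

# Reconstruct a spectrum for a new pixel on a 430-680 nm grid.
wavelengths = np.linspace(430, 680, 26)
basis = np.vander((wavelengths - 430) / 250, 4, increasing=True)  # 1, w, w^2, w^3
rgb_new = np.array([120.0, 80.0, 60.0])
spectrum = basis @ (M_est @ rgb_new)

print(np.allclose(M_est, M_true))  # True: exact recovery on noiseless data
```

The 430-680 nm grid matches the wavelength range over which the paper validates the method; the polynomial degree and normalization are assumptions for illustration.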

  7. Lights, Camera, Reflection! Digital Movies: A Tool for Reflective Learning

    ERIC Educational Resources Information Center

    Genereux, Annie Prud'homme; Thompson, William A.

    2008-01-01

    At the end of a biology course entitled Ecology, Evolution, and Genetics, students were asked to consider how their learning experience had changed their perception of either ecology or genetics. Students were asked to express their thoughts in the form of a "digital story" using readily available software to create movies for the purpose of…

  8. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Astrophysics Data System (ADS)

    Friedman, Gary L.

    1994-02-01

The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continues to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.
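The digital-signature idea can be shown in miniature: hash the image, sign the hash with a private key held inside the camera, and let anyone verify with the public key. The sketch below uses textbook-sized RSA parameters that are hopelessly insecure and purely illustrative; a real camera would use full-size RSA or elliptic-curve keys.

```python
import hashlib

# Toy RSA parameters (the classic textbook example; far too small to be
# secure -- for illustration of the signing/verification roles only).
p, q = 61, 53
n = p * q   # modulus 3233
e = 17      # public exponent
d = 2753    # private exponent (e*d = 1 mod lcm(p-1, q-1))

def digest(image_bytes):
    """SHA-256 digest of the image, reduced modulo the toy RSA modulus."""
    return int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n

def sign(image_bytes):
    """Camera-side: sign the image digest with the private key."""
    return pow(digest(image_bytes), d, n)

def verify(image_bytes, signature):
    """Anyone can check the signature using only the public key (n, e)."""
    return pow(signature, e, n) == digest(image_bytes)

original = b"\x00\x01\x02... raw sensor data ..."
sig = sign(original)
print(verify(original, sig))            # True: untouched image verifies
print(verify(original, (sig + 1) % n))  # False: forged signature rejected
print(verify(original + b"\xff", sig))  # tampered image: almost surely rejected
```

The key point of the article is that the private key lives in the camera itself, so a valid signature attests that the file is the unaltered sensor output.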

  9. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continues to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.

  10. Airborne multispectral identification of individual cotton plants using consumer-grade cameras

    Technology Transfer Automated Retrieval System (TEKTRAN)

Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...

  11. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
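The 432-million-pixel figure follows from simple arithmetic if one assumes the standard 23 cm × 23 cm (9 in × 9 in) aerial film frame; the frame size is an assumption here, since the abstract states only the spot size and the pixel count.

```python
# Back-of-envelope check of the quoted pixel equivalent for scanned NAPP film,
# assuming the standard 23 cm x 23 cm (9 in x 9 in) aerial film frame
# (the frame size is an assumption; the abstract gives only the 11 um spot).
frame_side_mm = 9 * 25.4          # 228.6 mm
spot_size_mm = 0.011              # 11 micrometer scanning spot
pixels_per_side = frame_side_mm / spot_size_mm
total_pixels = pixels_per_side ** 2
print(round(total_pixels / 1e6))  # 432 (million pixels)
```

Note also that an 11 µm spot is roughly matched to the film's 39 lp/mm resolution, since sampling at two pixels per line pair implies a spot of about 1/(2 × 39) mm ≈ 12.8 µm.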

  12. 75 FR 7519 - In the Matter of Certain Digital Cameras; Notice of Commission Determination Not To Review an...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-19

    ... COMMISSION In the Matter of Certain Digital Cameras; Notice of Commission Determination Not To Review an..., based on a complaint filed by Samsung Electronics Co., Ltd. of Korea and Samsung Electronics America... importation, or the sale within the United States after importation of certain digital cameras by reason...

  13. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
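The vignetting correction the study applies can be sketched generically: divide each image by a flat-field frame of a uniform illumination source, normalized to its mean, so that the radial brightness falloff cancels. The falloff model and all values below are synthetic, for illustration only.

```python
import numpy as np

# Synthetic scene of uniform radiance imaged through a lens whose
# transmission falls off away from the image center (vignetting).
h, w = 200, 300
y, x = np.mgrid[0:h, 0:w]
r2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
falloff = 1.0 - 0.4 * r2 / r2.max()   # up to 40% falloff in the corners
raw = 100.0 * falloff                 # image of a uniform target

# Laboratory flat field: an image of a uniform illumination source shows
# the same falloff pattern (scaled by the source brightness).
flat = 250.0 * falloff

# Correction template: divide by the flat field normalized to its mean.
corrected = raw * flat.mean() / flat

print(corrected.std() < 1e-9)  # True: the systematic falloff is removed
```

Real flat fields are noisy, which is why (as in the lead record of this listing) multiple laboratory frames are typically averaged before the template is inverted and applied.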

  14. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
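The estimate rests on the small-angle grating equation d·sinθ = mλ: first-order fringe spacing in the photograph is proportional to wavelength, so the ratio of the two measured spacings times the known laser wavelength gives the unknown IR wavelength. A sketch with hypothetical pixel measurements (the paper does not give its raw numbers):

```python
# Known reference: helium-neon red laser wavelength (assumed here; use the
# actual laser's datasheet value in practice).
LAMBDA_RED_NM = 632.8

def estimate_wavelength(spacing_ir_px, spacing_red_px, lambda_red_nm=LAMBDA_RED_NM):
    """Small-angle grating equation d*sin(theta) = m*lambda implies fringe
    spacing is proportional to wavelength, so the spacing ratio measured
    in one photo scales the known reference wavelength."""
    return lambda_red_nm * spacing_ir_px / spacing_red_px

# Hypothetical fringe spacings measured in pixels from a single image
# containing both interference patterns.
print(round(estimate_wavelength(118.0, 79.0), 1))  # 945.2 (nm)
```

A result near 940 nm is typical of the IR LEDs used in remote controls, which is a useful sanity check on the measured spacings.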

  15. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  16. Ka-band Digitally Beamformed Airborne Radar Using SweepSAR Technique

    NASA Technical Reports Server (NTRS)

    Sadowy, Gregory A.; Chuang, Chung-Lun; Ghaemi, Hirad; Heavey, Brandon A.; Lin, Lung-Sheng S.; Quaddus, Momin

    2012-01-01

    A paper describes a frequency-scaled SweepSAR demonstration that operates at Ka-Band (35.6 GHz), and closely approximates the DESDynl mission antenna geometry, scaled by 28. The concept relies on the SweepSAR measurement technique. An array of digital receivers captures waveforms from a multiplicity of elements. These are combined using digital beamforming in elevation and SAR processing to produce imagery. Ka-band (35.6 GHz) airborne SweepSAR using array-fed reflector and digital beamforming features eight simultaneous receive beams generated by a 40-cm offset-fed reflector and eight-element active array feed, and eight digital receiver channels with all raw data recorded and later used for beamforming. Illumination of the swath is accomplished using a slotted-waveguide antenna radiating 250 W peak power. This experiment has been used to demonstrate digital beamforming SweepSAR systems.
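The elevation beamforming described here combines recorded element waveforms after acquisition. A generic delay-and-sum sketch of that idea (the real system applies calibrated per-channel weights and sub-sample delays in its processing chain; this integer-sample version is only illustrative):

```python
import numpy as np

def delay_and_sum(channels, delays_s, fs):
    """Steer one receive beam by undoing each element's delay (rounded to
    whole samples) and summing coherently -- a minimal sketch of digital
    beamforming across recorded receiver channels."""
    beam = np.zeros(channels.shape[1])
    for ch, d in zip(channels, delays_s):
        beam += np.roll(ch, -int(round(d * fs)))
    return beam / len(channels)

# Eight elements see the same pulse, each delayed by 4 more samples.
fs = 1e6
t = np.arange(256) / fs
pulse = np.sin(2 * np.pi * 50e3 * t) * np.exp(-(((t * fs) - 64) / 16) ** 2)
shifts = np.arange(8) * 4                     # element delays in whole samples
channels = np.stack([np.roll(pulse, s) for s in shifts])

beam = delay_and_sum(channels, shifts / fs, fs)
print(bool(np.allclose(beam, pulse, atol=1e-9)))  # True: coherent recombination
```

Because the delays are undone before summation, signals from the steered direction add coherently while signals from other directions (with mismatched delays) average down.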

  17. Development of an XYZ Digital Camera with Embedded Color Calibration System for Accurate Color Acquisition

    NASA Astrophysics Data System (ADS)

    Kretkowski, Maciej; Jablonski, Ryszard; Shimodaira, Yoshifumi

    Acquisition of accurate colors is important in the modern era of widespread exchange of electronic multimedia. The variety of device-dependent color spaces causes trouble with accurate color reproduction. In this paper we present an overview of our digital camera system with device-independent output formed from tristimulus XYZ values. The outstanding accuracy and fidelity of the acquired color is achieved in our system by employing an embedded color calibration system based on an emissive device generating reference calibration colors with user-defined spectral distributions and chromaticity coordinates. The system was tested by calibrating the camera using 24 reference colors spectrally reproduced from the 24 color patches of the Macbeth Chart. The average color difference (CIEDE2000) was found to be ΔE = 0.83, which is an outstanding result compared to commercially available digital cameras.
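The paper's calibration uses an emissive reference device; a common, simpler stand-in for mapping device RGB to tristimulus XYZ is a 3×3 matrix fit by least squares over the patch measurements. A sketch with synthetic data (the mixing matrix and noise level below are placeholders, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: device RGB responses for 24 patches and their
# known XYZ values, synthesized from a ground-truth mixing matrix plus noise.
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0, 1, size=(24, 3))
xyz = rgb @ M_true.T + rng.normal(0, 1e-3, size=(24, 3))

# Least-squares 3x3 transform: minimize ||rgb @ M.T - xyz|| over M.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M.T

print(bool(np.allclose(M, M_true, atol=0.01)))  # True: recovered within noise
```

A linear fit like this cannot absorb nonlinearities such as sensor gamma, which is one motivation for the spectrally controlled emissive calibration the paper describes.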

  18. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model where the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-color Macbeth Colorchecker was used. This data was combined with the spectral irradiance of mainly three different light sources: CIE A, D65 and F11. The sensor variations were determined from a very large population from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters were picked out and measured. The resulting images generated by the model were then analyzed in the CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.

  19. Digital data from the Great Sand Dunes airborne gravity gradient survey, south-central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Abraham, J.D.; Grauch, V.J.S.; Labson, V.F.; Hodges, G.

    2013-01-01

    This report contains digital data and supporting explanatory files describing data types, data formats, and survey procedures for a high-resolution airborne gravity gradient (AGG) survey at Great Sand Dunes National Park, Alamosa and Saguache Counties, south-central Colorado. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve. The data described were collected from a high-resolution AGG survey flown in February 2012, by Fugro Airborne Surveys Corp., on contract to the U.S. Geological Survey. Scientific objectives of the AGG survey are to investigate the subsurface structural framework that may influence groundwater hydrology and seismic hazards, and to investigate AGG methods and resolution using different flight specifications. Funding was provided by an airborne geophysics training program of the U.S. Department of Defense's Task Force for Business & Stability Operations.

  20. X-ray imaging using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Winch, N. M.; Edgar, A.

    2011-10-01

    The recent advancements in consumer-grade digital camera technology and the introduction of high-resolution, high sensitivity CsBr:Eu 2+ storage phosphor imaging plates make possible a new cost-effective technique for X-ray imaging. The imaging plate is bathed with red stimulating light by high-intensity light-emitting diodes, and the photostimulated image is captured with a digital single-lens reflex (SLR) camera. A blue band-pass optical filter blocks the stimulating red light but transmits the blue photostimulated luminescence. Using a Canon EOS 5D Mk II camera and an f/1.4 wide-angle lens, the optical image of a 240×180 mm 2 Konica CsBr:Eu 2+ imaging plate from a position 230 mm in front of the camera lens can be focussed so as to laterally fill the 35×23.3 mm 2 camera sensor, and recorded in 2808×1872 pixel elements, corresponding to an equivalent pixel size on the plate of 88 μm. The analogue-to-digital conversion from the camera electronics is 13 bits, but the dynamic range of the imaging system as a whole is limited in practice by noise to about 2.5 orders of magnitude. The modulation transfer function falls to 0.2 at a spatial frequency of 2.2 line pairs/mm. The limiting factor of the spatial resolution is light scattering in the plate rather than the camera optics. The limiting factors for signal-to-noise ratio are shot noise in the light, and dark noise in the CMOS sensor. Good quality images of high-contrast objects can be recorded with doses of approximately 1 mGy. The CsBr:Eu 2+ plate has approximately three times the readout sensitivity of a similar BaFBr:Eu 2+ plate.

  1. A versatile digital camera trigger for telescopes in the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Schwanke, U.; Shayduk, M.; Sulanke, K.-H.; Vorobiov, S.; Wischnewski, R.

    2015-05-01

    This paper describes the concept of an FPGA-based digital camera trigger for imaging atmospheric Cherenkov telescopes, developed for the future Cherenkov Telescope Array (CTA). The proposed camera trigger is designed to select images initiated by the Cherenkov emission of extended air showers from very-high energy (VHE, E > 20 GeV) photons and charged particles while suppressing signatures from background light. The trigger comprises three stages. A first stage employs programmable discriminators to digitize the signals arriving from the camera channels (pixels). At the second stage, a grid of low-cost FPGAs is used to process the digitized signals for camera regions with 37 pixels. At the third stage, trigger conditions found independently in any of the overlapping 37-pixel regions are combined into a global camera trigger by a few central FPGAs. Trigger prototype boards based on Xilinx FPGAs have been designed, built and tested and were shown to function properly. Using these components a full camera trigger with a power consumption and price per channel of about 0.5 W and 19 €, respectively, can be built. With the described design the camera trigger algorithm can take advantage of pixel information in both the space and the time domain allowing, for example, the creation of triggers sensitive to the time-gradient of a shower image; the time information could also be exploited to adjust the time window of the acquisition system for pixel data online. Combining the results of the parallel execution of different trigger algorithms (optimized, for example, for the lowest and highest energies, respectively) on each FPGA can result in a better response over all photon energies (as demonstrated by Monte Carlo simulation in this work).
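The second-stage region logic can be illustrated with a toy neighbor-multiplicity condition: a region fires when a discriminator-fired pixel has enough fired neighbors to look shower-like rather than like isolated night-sky noise. This is a simplified stand-in for the FPGA pattern logic, with a hypothetical 7-pixel hexagonal cell and multiplicity threshold:

```python
def region_trigger(fired, neighbors, multiplicity=3):
    """Trigger a pixel region when any fired pixel plus its fired
    neighbors reach the multiplicity threshold -- a toy version of the
    second-stage pattern search over a 37-pixel region."""
    for pix, nbrs in neighbors.items():
        if fired[pix] and sum(fired[n] for n in nbrs) + 1 >= multiplicity:
            return True
    return False

# Toy 7-pixel hexagonal cell: pixel 0 surrounded by pixels 1..6.
neighbors = {0: [1, 2, 3, 4, 5, 6]}
shower_like = {i: i in (0, 1, 2) for i in range(7)}   # compact 3-pixel cluster
night_sky = {i: i in (0, 4) for i in range(7)}        # isolated noise hits

print(region_trigger(shower_like, neighbors))  # True
print(region_trigger(night_sky, neighbors))    # False
```

The hardware evaluates many such overlapping regions in parallel every sampling period; the compactness requirement is what suppresses accidental triggers from uncorrelated night-sky background photons.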

  2. Comparison of mosaicking techniques for airborne images from consumer-grade cameras

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Images captured from airborne imaging systems have the advantages of relatively low cost, high spatial resolution, and real/near-real-time availability. Multiple images taken from one or more flight lines could be used to generate a high-resolution mosaic image, which could be useful for diverse rem...

  3. Evaluation of natural color and color infrared digital cameras as a remote sensing tool for natural resource management

    NASA Astrophysics Data System (ADS)

    Bobbe, Thomas J.; McKean, Jim; Zigadlo, Joseph P.

    1995-09-01

    Digital cameras are a recent development in electronic imaging that provide a unique capability to acquire high resolution digital imagery in near real-time. The USDA Forest Service Nationwide Forestry Applications Program has recently evaluated natural color and color infrared digital camera systems as a remote sensing tool to collect resource information. Digital cameras are well suited for small projects and complement the use of other remote sensing systems to perform environmental monitoring, sample surveys and accuracy assessments, and update geographic information systems (GIS) data bases.

  4. Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean

    PubMed Central

    Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave

    2009-01-01

    Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
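The band-ratio idea behind this method can be sketched directly: yellow substance absorbs strongly at blue wavelengths, so the blue/green ratio of upwelling light falls as its concentration rises. The pixel values below are placeholders to show the direction of the effect, not calibration data from the paper:

```python
import numpy as np

def blue_green_ratio(img_rgb):
    """Mean blue/green ratio of a water-surface image patch, the kind of
    RGB band ratio the study relates to water quality parameters."""
    b = img_rgb[..., 2].mean()
    g = img_rgb[..., 1].mean()
    return b / g

# Hypothetical 8x8 patches of upwelling-light imagery (RGB order).
clear = np.tile([20, 80, 90], (8, 8, 1)).astype(float)    # low absorption
yellow = np.tile([25, 85, 45], (8, 8, 1)).astype(float)   # blue absorbed

print(blue_green_ratio(clear) > blue_green_ratio(yellow))  # True
```

Turning such a ratio into a concentration requires an empirical calibration against radiometer or water-sample measurements, which is what the field campaigns in Galway Bay and the Rockall Trough provided.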

  5. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties due to numerous potential applications. However, the established semiconductor sensor technologies and optics are essentially planar, which poses great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned and integrated, and elastically transformed to hemispherical shapes. This imaging device demonstrates nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. We have illustrated key features of operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  6. Development and Utilization of High Precision Digital Elevation Data taken by Airborne Laser Scanner

    NASA Astrophysics Data System (ADS)

    Akutsu, Osamu; Ohta, Masataka; Isobe, Tamio; Ando, Hisamitsu; Noguchi, Takahiro; Shimizu, Masayuki

    2005-03-01

    Disasters caused by heavy rain in urban areas bring damage such as chaos in road and railway transport systems, power failures, breakdown of the telephone system, and submersion of built-up areas, subways and underground shopping arcades. It is important to obtain high-precision elevation data that show the detailed landform, because a slight height difference considerably affects flood damage. Therefore, the Geographical Survey Institute (GSI) is preparing a 5-m grid digital terrain model (DTM) based on precise ground elevation data taken by airborne laser scanner. This paper describes the process and an example of the use of the 5-m grid digital data set.

  7. A pilot study of digital camera resolution metrology protocols proposed under ISO 12233, edition 2

    NASA Astrophysics Data System (ADS)

    Williams, Don; Wueller, Dietmar; Matherson, Kevin; Yoshida, Hideaka; Hubel, Paul

    2008-01-01

    Edition 2 of ISO 12233, Resolution and Spatial Frequency Response (SFR) for Electronic Still Picture Imaging, is likely to offer a choice of techniques for determining spatial resolution for digital cameras different from the initial standard. These choices include 1) the existing slanted-edge gradient SFR protocols but with low contrast features, 2) a polar coordinate sine wave SFR technique using a Siemens star element, and 3) visual resolution threshold criteria using continuous linear spatial frequency bar pattern features. A comparison of these methods will be provided. To establish the level of consistency between the results of these methods, theoretical and laboratory experiments were performed by members of the ISO TC42/WG18 committee. Test captures were performed on several consumer and SLR digital cameras using the on-board image processing pipelines. All captures were done in a single session using the same lighting conditions and camera operator. Generally, there was good conformance between methods albeit with some notable differences. Speculation on the reason for these differences and how this can be diagnostic in digital camera evaluation will be offered.
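The core of the slanted-edge method can be shown in one dimension: differentiate the edge-spread function (ESF) to get the line-spread function (LSF), then the normalized Fourier magnitude of the LSF is the SFR. The real protocol adds edge-angle estimation, 4× oversampled binning and windowing; this minimal sketch uses a synthetic Gaussian-blurred edge:

```python
import numpy as np
from math import erf

def sfr_from_edge(esf):
    """Slanted-edge core idea in 1-D: ESF -> (derivative) -> LSF ->
    (|FFT|, normalized at DC) -> SFR. ISO 12233 adds oversampling and
    windowing steps omitted here."""
    lsf = np.diff(esf)
    spec = np.abs(np.fft.rfft(lsf))
    return spec / spec[0]

# Synthetic edge blurred by a Gaussian of sigma = 2 samples.
x = np.arange(-64, 64)
sigma = 2.0
esf = np.array([0.5 * (1 + erf(v / (sigma * 2 ** 0.5))) for v in x])

sfr = sfr_from_edge(esf)
print(round(sfr[0], 3), bool(sfr[1] < 1.0), bool(sfr[10] < sfr[1]))  # 1.0 True True
```

For a Gaussian blur the SFR falls off monotonically with frequency; the low-contrast edges proposed in Edition 2 reduce the influence of nonlinear in-camera tone processing on this measurement.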

  8. Comparison of fractional vegetation cover derived from digital camera and MODIS NDVI in Mongolia

    NASA Astrophysics Data System (ADS)

    Jaebeom, K.; Jang, K.; Kang, S.

    2014-12-01

    Satellite remote sensing can observe land surface vegetation continuously and repeatedly over large areas, though it requires complex processing to correct errors caused by the atmosphere and topography. On the other hand, images captured by digital camera provide several benefits such as high spatial resolution, a simple shooting method, and relatively low-priced instruments. Furthermore, digital cameras are less affected by atmospheric effects such as path radiance than satellite imagery, and can capture the actual land cover directly. The objective of this study is to compare fractional vegetation cover derived from digital camera and MODIS Normalized Difference Vegetation Index (NDVI) in Mongolia. 670 ground images including green leaves and soil surface, captured by digital camera at 134 sites in Mongolia from 2011 to 2014, were used to classify the vegetation cover fraction. Thirteen images captured in Mongolia and South Korea were selected to determine the best classification method. Various classification methods including four supervised classifications, two unsupervised classifications, and a histogram method were used to separate the green vegetation in camera images that were converted to two color spaces, Red-Green-Blue (RGB) and Hue-Intensity-Saturation (HIS). Those results were validated against a manually counted dataset from local plant experts. The maximum likelihood classification (MLC) with the HIS color space showed the best agreement with the manually counted dataset among the classification methods. The correlation coefficient and the root mean square error were 1.008 and 7.88%, respectively. Our preliminary result indicates that the MLC with HIS color space has a potential to classify the green vegetation in Mongolia.
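The HIS-based separation used in this study can be illustrated with a much simpler stand-in: convert RGB to hue and count pixels falling in a green hue band. The study itself uses supervised maximum likelihood classification; the hue thresholds below are hypothetical:

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue channel of the HSI color space, in degrees (0-360)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1, 1)))
    return np.where(b <= g, theta, 360.0 - theta)

def green_fraction(img, hue_lo=60.0, hue_hi=180.0):
    """Fraction of pixels in a (hypothetical) green hue band -- a simple
    threshold stand-in for the supervised MLC used in the study."""
    hue = rgb_to_hue(img.astype(float))
    return float(np.mean((hue >= hue_lo) & (hue <= hue_hi)))

# Toy frame: left half green leaves, right half brown soil.
img = np.zeros((10, 10, 3))
img[:, :5] = [40, 160, 50]     # green vegetation
img[:, 5:] = [150, 110, 70]    # bare soil
print(green_fraction(img))  # 0.5
```

Working in hue rather than raw RGB reduces sensitivity to illumination intensity, which is one reason the HIS color space outperformed RGB in the study's comparison.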

  9. Feasibility of an airborne TV camera as a size spectrometer for cloud droplets in daylight.

    PubMed

    Roscoe, H K; Lachlan-Cope, T A; Roscoe, J

    1999-01-20

    Photographs of clouds taken with a camera with a large aperture ratio must have a short depth of focus to resolve small droplets. Hence the sampling volume is small, which limits the number of droplets and gives rise to a large statistical error on the number counted. However, useful signals can be obtained with a small aperture ratio, which allows for a sample volume large enough for counting cloud droplets at aircraft speeds with useful spatial resolution. The signal is sufficient to discriminate against noise from a sunlit cloud as background, provided the bandwidth of the light source and camera are restricted, and against readout noise. Hence, in principle, an instrument to sample the size distribution of cloud droplets from aircraft in daylight can be constructed from a simple TV camera and an array of laser diodes, without any components or screens external to the aircraft window.

  10. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through some 3D scanning method. Laser scanning and photogrammetry are the two main methods used. For laser scanning, a video camera and a laser source are necessary, and for photogrammetry, a digital still camera with high resolution is indispensable. In some 3D modeling tasks, the two methods are often integrated to get satisfactory results. Although much research has been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Nowadays many consumer digital cameras, such as the Canon EOS 5D Mark II, offer both still photo recording at more than 10M pixels and full 1080p HD movie recording, so an integrated scanning system can be designed around such a camera. A square plate glued with coded marks is used to hold the 3D objects, and two straight wooden rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate establish a world coordinate system and can be used as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes are obtained in this module. In the laser scanning module, a hand-held line laser is used to scan the object, and the two straight rulers serve as reference planes to determine the position of the laser. The laser scan results in a dense point cloud that can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy-functional method by fusion of the feature points, rough volume and the dense point cloud. The design

  11. Temporal monitoring of groundcover change using digital cameras

    NASA Astrophysics Data System (ADS)

    Zerger, A.; Gobbett, D.; Crossman, C.; Valencia, P.; Wark, T.; Davies, M.; Handcock, R. N.; Stol, J.

    2012-10-01

    This paper describes the development and testing of an automated method for detecting change in groundcover vegetation in response to kangaroo grazing using visible wavelength digital photography. The research is seen as a precursor to the future deployment of autonomous vegetation monitoring systems (environmental sensor networks). The study was conducted over six months with imagery captured every 90 min and post-processed using supervised image processing techniques. Synchronous manual assessments of groundcover change were also conducted to evaluate the effectiveness of the automated procedures. Results show that for particular cover classes such as Live Vegetation and Bare Ground, there is excellent temporal concordance between automated and manual methods. However, litter classes were difficult to consistently differentiate. A limitation of the method is the inability to effectively deal with change in the vertical profile of groundcover. This indicates that the three dimensional structure related to species composition and plant traits play an important role in driving future experimental designs. The paper concludes by providing lessons for conducting future groundcover monitoring experiments.

  12. Recovering fluorescent spectra with an RGB digital camera and color filters using different matrix factorizations.

    PubMed

    Nieves, Juan L; Valero, Eva M; Hernández-Andrés, Javier; Romero, Javier

    2007-07-01

    The aim of a multispectral system is to recover a spectral function at each image pixel, but when a scene is digitally imaged under a light of unknown spectral power distribution (SPD), the image pixels give incomplete information about the spectral reflectances of objects in the scene. We have analyzed how accurately the spectra of artificial fluorescent light sources can be recovered with a digital CCD camera. The red-green-blue (RGB) sensor outputs are modified by the use of successive cutoff color filters. Four algorithms for simplifying the spectra datasets are used: nonnegative matrix factorization (NMF), independent component analysis (ICA), a direct pseudoinverse method, and principal component analysis (PCA). The algorithms are tested using both simulated data and data from a real RGB digital camera. The methods are compared in terms of the minimum rank of factorization and the number of sensors required to derive acceptable spectral and colorimetric SPD estimations; the PCA results are also given for the sake of comparison. The results show that all the algorithms surpass the PCA when a reduced number of sensors is used. The experimental results suggest a significant loss of quality when more than one color filter is used, which agrees with the previous results for reflectances. Nevertheless, an RGB digital camera with or without a prefilter is found to provide good spectral and colorimetric recovery of indoor fluorescent lighting and can be used for color correction without the need of a telespectroradiometer.
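The direct pseudoinverse method named in the abstract amounts to solving for basis weights that reproduce the camera response, then reconstructing the SPD from the basis. A sketch with synthetic responsivities and basis vectors (all numerical data below are placeholders; the study uses measured camera sensitivities and fluorescent-lamp training spectra):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical camera model: 3 RGB sensitivity curves over 31 wavelengths,
# and a 3-vector basis of training SPDs (e.g. from PCA/NMF/ICA on lamps).
n_wl, n_basis = 31, 3
S = np.abs(rng.normal(size=(3, n_wl)))        # sensor responsivities
B = np.abs(rng.normal(size=(n_wl, n_basis)))  # SPD basis vectors

def recover_spd(rgb, S, B):
    """Direct pseudoinverse recovery: find basis weights w such that
    S @ (B @ w) matches the camera response, then reconstruct the SPD."""
    w = np.linalg.pinv(S @ B) @ rgb
    return B @ w

# A ground-truth SPD lying in the basis, imaged and then recovered.
spd_true = B @ np.array([0.6, 0.3, 0.9])
rgb = S @ spd_true
spd_rec = recover_spd(rgb, S, B)
print(bool(np.allclose(spd_rec, spd_true)))  # True for in-basis spectra
```

With only three sensors the recoverable subspace is three-dimensional, which is why the study adds cutoff color filters (effectively extra sensors) when higher-rank factorizations are needed.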

  13. CMOS image sensor noise reduction method for image signal processor in digital cameras and camera phones

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, SeongDeok; Choe, Wonhee; Kim, Chang-Yong

    2007-02-01

    Digital images captured from CMOS image sensors suffer from Gaussian noise and impulsive noise. To efficiently reduce the noise in the Image Signal Processor (ISP), we analyze the noise characteristics of the ISP imaging pipeline where the noise reduction algorithm is performed. Gaussian noise reduction and impulsive noise reduction methods are proposed for proper ISP implementation in the Bayer domain. The proposed method takes advantage of the analyzed noise characteristics to calculate noise reduction filter coefficients, so noise is adaptively reduced according to the scene environment. Since noise is amplified and its characteristics vary as the image sensor signal undergoes several image processing steps, it is better to remove noise at an earlier stage of the imaging pipeline. Thus, noise reduction is carried out in the Bayer domain of the ISP imaging pipeline. The method is tested on an ISP imaging pipeline with images captured from a Samsung 2M CMOS image sensor test module. The experimental results show that the proposed method removes noise while effectively preserving edges.
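The two noise types call for different treatment on a same-color Bayer plane: an outlier far from its local median is impulsive and gets replaced, while remaining pixels are smoothed toward the neighborhood mean for Gaussian noise. A deliberately simple sketch of that split (the threshold and blend weight are illustrative, not the paper's adaptive coefficients):

```python
import numpy as np

def denoise_bayer_plane(plane, impulse_thresh=60.0):
    """Sketch of same-color-plane denoising: a pixel far from its local
    median is treated as impulsive and replaced; otherwise it is blended
    toward the neighborhood mean (Gaussian-noise smoothing)."""
    out = plane.astype(float).copy()
    h, w = plane.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = plane[y - 1:y + 2, x - 1:x + 2].astype(float)
            med = np.median(win)
            if abs(plane[y, x] - med) > impulse_thresh:
                out[y, x] = med                             # impulse removal
            else:
                out[y, x] = 0.5 * plane[y, x] + 0.5 * win.mean()
    return out

plane = np.full((5, 5), 100.0)
plane[2, 2] = 255.0                                         # salt impulse
clean = denoise_bayer_plane(plane)
print(clean[2, 2])  # 100.0 (replaced by the window median)
```

A production ISP vectorizes this per color plane and, as the paper argues, derives the threshold and blend weight from the measured noise model rather than fixed constants.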

  14. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.

  15. A two-camera imaging system for pest detection and aerial application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, whi...

  16. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    Visibility and clarity of remotely sensed images acquired by consumer grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands across the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or assist search and rescue or similar applications which require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution using a single modified DSLR camera in conjunction with image processing techniques which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and Near Infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Processed data using our proposed method shows significant visibility improvements compared with other existing solutions.

  17. Application of phase matching autofocus in airborne long-range oblique photography camera

    NASA Astrophysics Data System (ADS)

    Petrushevsky, Vladimir; Guberman, Asaf

    2014-06-01

    The Condor2 long-range oblique photography (LOROP) camera is mounted in an aerodynamically shaped pod carried by a fast jet aircraft. Large aperture, dual-band (EO/MWIR) camera is equipped with TDI focal plane arrays and provides high-resolution imagery of extended areas at long stand-off ranges, at day and night. Front Ritchey-Chretien optics is made of highly stable materials. However, the camera temperature varies considerably in flight conditions. Moreover, a composite-material structure of the reflective objective undergoes gradual dehumidification in dry nitrogen atmosphere inside the pod, causing some small decrease of the structure length. The temperature and humidity effects change a distance between the mirrors by just a few microns. The distance change is small but nevertheless it alters the camera's infinity focus setpoint significantly, especially in the EO band. To realize the optics' resolution potential, the optimal focus shall be constantly maintained. In-flight best focus calibration and temperature-based open-loop focus control give mostly satisfactory performance. To get even better focusing precision, a closed-loop phase-matching autofocus method was developed for the camera. The method makes use of an existing beamsharer prism FPA arrangement where aperture partition exists inherently in an area of overlap between the adjacent detectors. The defocus is proportional to an image phase shift in the area of overlap. Low-pass filtering of raw defocus estimate reduces random errors related to variable scene content. Closed-loop control converges robustly to precise focus position. The algorithm uses the temperature- and range-based focus prediction as an initial guess for the closed-loop phase-matching control. The autofocus algorithm achieves excellent results and works robustly in various conditions of scene illumination and contrast.
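The phase-matching measurement reduces to estimating the relative image shift between the two half-aperture views in the detector overlap region; defocus is proportional to that shift. A minimal integer-pixel sketch using a cross-correlation peak on synthetic 1-D strips (the real system filters raw estimates over time and refines to sub-pixel precision):

```python
import numpy as np

def phase_shift_px(a, b):
    """Integer-pixel shift between two views of the same scene strip,
    from the cross-correlation peak; defocus is proportional to this
    shift (a sub-pixel fit around the peak would refine it)."""
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic overlap strips: the second view lags the first by 3 samples.
rng = np.random.default_rng(2)
scene = rng.normal(size=200)
a = scene[10:110]
b = scene[13:113]
print(phase_shift_px(a, b))  # 3
```

Mean subtraction makes the estimate robust to brightness offsets between the views, and averaging many such raw estimates (as the paper's low-pass filtering does) suppresses the scene-content-dependent errors of any single frame.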

  18. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
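
White balance is one of the DSP stages listed above. As a generic illustration (the classic gray-world method, not the authors' specific low-complexity algorithm), channel gains can be derived so that each channel's mean matches the global mean:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: scale each channel so its mean matches
    the global mean. `rgb` is an (H, W, 3) float array in [0, 1].
    Textbook method shown for illustration only.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means.mean() / means              # equalize channel means
    return np.clip(rgb * gains, 0.0, 1.0)
```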

  19. Project PANOPTES: a citizen-scientist exoplanet transit survey using commercial digital cameras

    NASA Astrophysics Data System (ADS)

    Gee, Wilfred T.; Guyon, Olivier; Walawender, Josh; Jovanovic, Nemanja; Boucher, Luc

    2016-08-01

    Project PANOPTES (http://www.projectpanoptes.org) is aimed at establishing a collaboration between professional astronomers, citizen scientists and schools to discover a large number of exoplanets with the transit technique. We have developed digital-camera-based imaging units to cover large parts of the sky and look for exoplanet transits. Each unit costs approximately $5000 USD and runs automatically every night. By using low-cost, commercial digital single-lens reflex (DSLR) cameras, we have developed a uniquely cost-efficient system for wide-field astronomical imaging, offering approximately two orders of magnitude better etendue per unit of cost than professional wide-field surveys. Spanning both science and outreach, our vision is to have thousands of these units, built by schools and citizen scientists, gathering data, making this project the most productive exoplanet discovery machine in the world.

  20. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m³/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify the behavior of aquatic fauna in response to a discharge disturbance.

  1. Unsupervised illuminant estimation from natural scenes: an RGB digital camera suffices.

    PubMed

    Nieves, Juan L; Plata, Clara; Valero, Eva M; Romero, Javier

    2008-07-10

    A linear pseudo-inverse method for unsupervised illuminant recovery from natural scenes is presented. The algorithm, which uses a digital RGB camera, selects the naturally occurring bright areas (not necessarily the white ones) in natural images and converts the RGB digital counts directly into the spectral power distribution of the illuminants using a learning-based spectral procedure. Computations show a good spectral and colorimetric performance when only three sensors (a three-band RGB camera) are used. These results go against previous findings concerning the recovery of spectral reflectances and radiances, which claimed that the greater the number of sensors, the better the spectral performance. Combining the device with the appropriate computations can yield spectral information about objects and illuminants simultaneously, avoiding the need for spectroradiometric measurements. The method works well and needs neither a white reference located in the natural scene nor direct measurements of the spectral power distribution of the light.
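
The linear pseudo-inverse idea behind the method above can be sketched in a few lines: a transform mapping RGB digital counts to spectral power distributions is learned from a training set of measured spectra and the corresponding camera responses. The synthetic training data here are an assumption for illustration; the paper's training procedure and bright-area selection are not reproduced:

```python
import numpy as np

def learn_recovery_matrix(train_spds, train_rgbs):
    """Least-squares transform mapping camera RGB to spectra.

    train_spds: (n_wavelengths, n_samples) measured spectral power
    distributions; train_rgbs: (3, n_samples) camera responses.
    Returns W such that spd ≈ W @ rgb (linear pseudo-inverse recovery
    in spirit; the paper's exact learning step may differ).
    """
    return train_spds @ np.linalg.pinv(train_rgbs)

def recover_spd(W, rgb):
    """Recover a spectral power distribution from one RGB triplet."""
    return W @ rgb
```

If the world really were linear in the camera responses, the learned matrix would recover spectra exactly; in practice the recovery is approximate, which is what the paper evaluates spectrally and colorimetrically.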

  2. Digital camera workflow for high dynamic range images using a model of retinal processing

    NASA Astrophysics Data System (ADS)

    Tamburrino, Daniel; Alleysson, David; Meylan, Laurence; Süsstrunk, Sabine

    2008-02-01

    We propose a complete digital camera workflow to capture and render high dynamic range (HDR) static scenes, from RAW sensor data to an output-referred encoded image. In traditional digital camera processing, demosaicing is one of the first operations done after scene analysis. It is followed by rendering operations, such as color correction and tone mapping. In our workflow, which is based on a model of retinal processing, most of the rendering steps are performed before demosaicing. This reduces the complexity of the computation, as only one third of the pixels are processed. This is especially important as our tone mapping operator applies local and global tone corrections, which are usually needed to render high-dynamic-range scenes well. Our algorithms efficiently process HDR images with different keys and different content.

  3. Long-Term Tracking of a Specific Vehicle Using Airborne Optical Camera Systems

    NASA Astrophysics Data System (ADS)

    Kurz, F.; Rosenbaum, D.; Runge, H.; Cerra, D.; Mattyus, G.; Reinartz, P.

    2016-06-01

    In this paper we present two low-cost, airborne sensor systems capable of long-term vehicle tracking. Based on the properties of the sensors, a method for automatic real-time, long-term tracking of individual vehicles is presented. This combines the detection and tracking of the vehicle in low frame rate image sequences and applies the lagged Cell Transmission Model (CTM) to handle longer tracking outages occurring in complex traffic situations, e.g. tunnels. The CTM uses the traffic conditions in the proximity of the target vehicle and estimates its motion to predict the position where it reappears. The method is validated on an airborne image sequence acquired from a helicopter. Several reference vehicles are tracked within a range of 500 m in a complex urban traffic situation. An artificial tracking outage of 240 m is simulated, which is handled by the CTM. For this, all the vehicles in close proximity are automatically detected and tracked to estimate the basic density-flow relations of the CTM. Finally, the real and simulated trajectories of the reference vehicles in the outage are compared, showing good correspondence even in congested traffic situations.

  4. Comparison of mosaicking techniques for airborne images from consumer-grade cameras

    NASA Astrophysics Data System (ADS)

    Song, Huaibo; Yang, Chenghai; Zhang, Jian; Hoffmann, Wesley Clint; He, Dongjian; Thomasson, J. Alex

    2016-01-01

    Images captured from airborne imaging systems can be mosaicked for diverse remote sensing applications. The objective of this study was to identify appropriate mosaicking techniques and software to generate mosaicked images for use by aerial applicators and other users. Three software packages (Photoshop CC, Autostitch, and Pix4Dmapper) were selected for mosaicking airborne images acquired from a large cropping area. Ground control points were collected for georeferencing the mosaicked images and for evaluating the accuracy of eight mosaicking techniques. Analysis and accuracy assessment showed that Pix4Dmapper can be the first choice if georeferenced imagery with high accuracy is required. The spherical method in Photoshop CC can be an alternative for cost considerations, and Autostitch can be used to quickly mosaic images with reduced spatial resolution. The results also showed that the accuracy of image mosaicking techniques could be greatly affected by the size of the imaging area or the number of the images and that the accuracy would be higher for a small area than for a large area. The results from this study will provide useful information for the selection of image mosaicking software and techniques for aerial applicators and other users.
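
Accuracy assessment against ground control points, as used above, typically reduces to the root-mean-square error of the positional residuals between mosaic coordinates and surveyed coordinates. A minimal sketch with hypothetical coordinates (the study's exact accuracy metric is not specified here):

```python
import numpy as np

def gcp_rmse(mosaic_xy, surveyed_xy):
    """Root-mean-square positional error of a mosaic against GCPs.

    Both arguments are (n_points, 2) arrays of map coordinates in
    metres: the point locations read from the mosaic and the surveyed
    ground-truth locations.
    """
    residuals = np.asarray(mosaic_xy) - np.asarray(surveyed_xy)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```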

  5. Measuring the Orbital Period of the Moon Using a Digital Camera

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2006-01-01

    A method of measuring the orbital velocity of the Moon around the Earth using a digital camera is described. Separate images of the Moon and stars taken 24 hours apart were loaded into Microsoft PowerPoint and the centre of the Moon marked on each image. Four stars common to both images were connected together to form a "home-made" constellation.…

  6. Primary-consistent soft-decision color demosaicking for digital cameras (patent pending).

    PubMed

    Wu, Xiaolin; Zhang, Ning

    2004-09-01

    Color mosaic sampling schemes are widely used in digital cameras. Given the resolution of CCD sensor arrays, the image quality of digital cameras using mosaic sampling largely depends on the performance of the color demosaicking process. A common problem with existing color demosaicking algorithms is an inconsistency of sample interpolations in different primary color channels, which is the cause of the most objectionable color artifacts. To cure the problem, we propose a new primary-consistent soft-decision framework (PCSD) of color demosaicking. In the PCSD framework, we make multiple estimates of a missing color sample under different hypotheses on edge or texture directions. The estimates are made via a primary consistent interpolation, meaning that all three primary components of a color are interpolated in the same direction. The final estimate of a color sample is obtained by testing different interpolation hypotheses in the reconstructed full-resolution color image and selecting the best via an optimal statistical decision or inference process. A concrete color demosaicking method of the PCSD framework is presented. This new method eliminates certain types of color artifacts of existing color demosaicking methods. Extensive experimental results demonstrate that the PCSD approach can significantly improve the image quality of digital cameras in both subjective and objective measures. In some instances, our gain over the competing methods can be as much as 7 dB.

  7. Two detector, active digital holographic camera for 3D imaging and digital holographic interferometry

    NASA Astrophysics Data System (ADS)

    Żak, Jakub; Kujawińska, Małgorzata; Józwik, Michał

    2015-09-01

    In this paper we present the novel design and proof of concept of an active holographic camera consisting of two array detectors and a Liquid Crystal on Silicon (LCOS) Spatial Light Modulator (SLM). The device allows sequential or simultaneous capture of two Fresnel holograms of a 3D object/scene. The two-detector configuration provides an increased viewing angle of the camera, allows capturing two double-exposure holograms with different sensitivity vectors, and even facilitates capturing a synthetic-aperture hologram for static objects. The LCOS SLM, located in the reference arm, serves as an active element that enables phase shifting and proper pointing of the reference beams towards both detectors in a configuration that allows miniaturization of the camera. The laboratory model of the camera has been tested in different modes of operation, namely capture and reconstruction of a 3D scene and double-exposure holographic interferometry applied to an engineering object under load. A future extension of the camera's functionalities to Fourier hologram capture is discussed.

  8. Use of a new high-speed digital data acquisition system in airborne ice-sounding

    USGS Publications Warehouse

    Wright, David L.; Bradley, Jerry A.; Hodge, Steven M.

    1989-01-01

    A high-speed digital data acquisition and signal averaging system for borehole, surface, and airborne radio-frequency geophysical measurements was designed and built by the US Geological Survey. The system permits signal averaging at rates high enough to achieve significant signal-to-noise enhancement in profiling, even in airborne applications. The first field use of the system took place in Greenland in 1987 for recording data on a 150 by 150 km grid centered on the summit of the Greenland ice sheet. About 6000 line-km were flown and recorded using the new system. The data can be used to aid in siting a proposed scientific corehole through the ice sheet.

  9. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    NASA Astrophysics Data System (ADS)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost, increasing functionality and stability of ready-to-fly multi-copter systems has proliferated their use among non-hobbyists. As such, they became a very popular tool for aerial imaging. The overwhelming amount of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts indicate that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft.

  10. Use of a Digital Camera to Monitor the Growth and Nitrogen Status of Cotton

    PubMed Central

    Jia, Biao; He, Haibing; Ma, Fuyu; Diao, Ming; Jiang, Guiying; Zheng, Zhong; Cui, Jin; Fan, Hua

    2014-01-01

    The main objective of this study was to develop a nondestructive method for monitoring cotton growth and N status using a digital camera. Digital images were taken of the cotton canopies between emergence and full bloom. The green and red values were extracted from the digital images and then used to calculate canopy cover. The values of canopy cover were closely correlated with the normalized difference vegetation index and the ratio vegetation index measured using a GreenSeeker handheld sensor. Models were calibrated to describe the relationship between canopy cover and three growth properties of the cotton crop (i.e., aboveground total N content, LAI, and aboveground biomass). There were close, exponential relationships between canopy cover and the three growth properties. The relationship for estimating cotton aboveground total N content was the most precise, with a coefficient of determination (R2) of 0.978 and a root mean square error (RMSE) of 1.479 g m−2. Moreover, the models were validated in three fields of high-yield cotton. The results indicated that the best relationship, between canopy cover and aboveground total N content, had an R2 of 0.926 and an RMSE of 1.631 g m−2. In conclusion, as a near-ground remote assessment tool, digital cameras have good potential for monitoring cotton growth and N status. PMID:24723817
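
Canopy cover estimation from the green and red values, as described above, can be sketched as a per-pixel classification followed by a fraction count. The greater-green-than-red rule and the margin parameter are illustrative assumptions, not the paper's calibrated classifier:

```python
import numpy as np

def canopy_cover(green, red, margin=0):
    """Fraction of image pixels classified as canopy.

    A pixel counts as canopy when its green digital number exceeds its
    red digital number by more than `margin` (a simple stand-in for the
    paper's green/red classification rule).
    """
    green = np.asarray(green, dtype=float)
    red = np.asarray(red, dtype=float)
    return float((green - red > margin).mean())
```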

  11. Greenness indices from digital cameras predict the timing and seasonal dynamics of canopy-scale photosynthesis.

    PubMed

    Toomey, Michael; Friedl, Mark A; Frolking, Steve; Hufkens, Koen; Klosterman, Stephen; Sonnentag, Oliver; Baldocchi, Dennis D; Bernacchi, Carl J; Biraud, Sebastien C; Bohrer, Gil; Brzostek, Edward; Burns, Sean P; Coursolle, Carole; Hollinger, David Y; Margolis, Hank A; Mccaughey, Harry; Monson, Russell K; Munger, J William; Pallardy, Stephen; Phillips, Richard P; Torn, Margaret S; Wharton, Sonia; Zeri, Marcelo; Richardson, Andrew D

    2015-01-01

    The proliferation of digital cameras co-located with eddy covariance instrumentation provides new opportunities to better understand the relationship between canopy phenology and the seasonality of canopy photosynthesis. In this paper we analyze the abilities and limitations of canopy color metrics measured by digital repeat photography to track seasonal canopy development and photosynthesis, determine phenological transition dates, and estimate intra-annual and interannual variability in canopy photosynthesis. We used 59 site-years of camera imagery and net ecosystem exchange measurements from 17 towers spanning three plant functional types (deciduous broadleaf forest, evergreen needleleaf forest, and grassland/crops) to derive color indices and estimate gross primary productivity (GPP). GPP was strongly correlated with greenness derived from camera imagery in all three plant functional types. Specifically, the beginning of the photosynthetic period in deciduous broadleaf forest and grassland/crops and the end of the photosynthetic period in grassland/crops were both correlated with changes in greenness; changes in redness were correlated with the end of the photosynthetic period in deciduous broadleaf forest. However, it was not possible to accurately identify the beginning or ending of the photosynthetic period using camera greenness in evergreen needleleaf forest. At deciduous broadleaf sites, anomalies in integrated greenness and total GPP were significantly correlated up to 60 days after the mean onset date for the start of spring. More generally, results from this work demonstrate that digital repeat photography can be used to quantify both the duration of the photosynthetically active period as well as total GPP in deciduous broadleaf forest and grassland/crops, but that new and different approaches are required before comparable results can be achieved in evergreen needleleaf forest.
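
The standard camera greenness metric used in this kind of phenology work is the green chromatic coordinate, GCC = G / (R + G + B); the study's exact suite of color indices may differ, but a minimal computation looks like this:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean green chromatic coordinate, GCC = G / (R + G + B).

    `rgb` is an (H, W, 3) array of camera digital numbers; pixels whose
    channel sum is zero are excluded from the mean.
    """
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=2)
    gcc = np.divide(rgb[..., 1], total, out=np.zeros_like(total),
                    where=total > 0)
    return float(gcc[total > 0].mean())
```

Tracking this value through a season of repeat imagery yields the greenness curves that the paper correlates with tower-based GPP.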

  12. Whole-field thickness strain measurement using multiple camera digital image correlation system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Xie, Xin; Yang, Guobiao; Zhang, Boyang; Siebert, Thorsten; Yang, Lianxiang

    2017-03-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry, especially for strain measurement. The traditional 3D-DIC system can accurately obtain the whole-field 3D deformation. However, the conventional 3D-DIC system can only acquire the displacement field on a single surface and thus lacks information in the depth direction, so the strain in the thickness direction cannot be measured. In recent years, multiple-camera DIC (multi-camera DIC) systems have become a new research topic, offering far more measurement possibilities than the conventional 3D-DIC system. In this paper, a multi-camera DIC system used to measure the whole-field thickness strain is introduced in detail. Four cameras are used in the system: two are placed at the front side of the object, and the other two at the back side. Each pair of cameras constitutes a sub-stereo-vision system and measures the whole-field 3D deformation on one side of the object. A special calibration plate is used to calibrate the system, and the information from the two subsystems is linked by the calibration result. The whole-field thickness strain can be measured using the information obtained from both sides of the object. Additionally, the major and minor strains on the object surface are obtained simultaneously, and a whole-field quasi-3D strain history is acquired. The theoretical derivation for the system, the experimental process, and an application of determining the thinning strain limit based on the obtained whole-field thickness strain history are introduced in detail.
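
Once the front and back surfaces are measured in a common calibration frame, the thickness strain reduces geometrically to the relative change of the gap between them. This is a simplified sketch of that final step (the paper's full derivation, including surface matching, is not reproduced):

```python
import numpy as np

def thickness_strain(z_front, z_back, t0):
    """Whole-field engineering thickness strain from two measured surfaces.

    z_front, z_back: out-of-plane coordinates of the front and back
    surfaces on a common grid, expressed in the same calibration frame;
    t0: the initial (undeformed) thickness. Returns (t - t0) / t0 at
    each grid point.
    """
    thickness = np.asarray(z_front) - np.asarray(z_back)
    return (thickness - t0) / t0
```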

  13. Teaching with Technology: Step Back and Hand over the Cameras! Using Digital Cameras to Facilitate Mathematics Learning with Young Children in K-2 Classrooms

    ERIC Educational Resources Information Center

    Northcote, Maria

    2011-01-01

    Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…

  14. Determination of visual range during fog and mist using digital camera images

    NASA Astrophysics Data System (ADS)

    Taylor, John R.; Moogan, Jamie C.

    2010-08-01

    During the winter of 2008, daily time series of images of five "unit-cell chequerboard" targets were acquired using a digital camera. The camera and targets were located in the Majura Valley approximately 3 km from Canberra airport. We show how the contrast between the black and white sections of the targets is related to the meteorological range (or standard visual range), and compare estimates of this quantity derived from images acquired during fog and mist conditions with those from the Vaisala FD-12 visibility meter operated by the Bureau of Meteorology at Canberra Airport. The two sets of ranges are consistent but show the variability of visibility in the patchy fog conditions that often prevail in the Majura Valley. Significant spatial variations of the light extinction coefficient were found to occur over the longest 570 m optical path sampled by the imaging system. Visual ranges could be estimated out to ten times the distance to the furthest target, or approximately 6 km, in these experiments. Image saturation of the white sections of the targets was the major limitation on the quantitative interpretation of the images. In the future, the camera images will be processed in real time so that the camera exposure can be adjusted to avoid saturation.
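
The contrast-to-range relation underlying the method above is conventionally described by Koschmieder's law: apparent contrast decays as C(d) = C0 exp(-beta d), and the meteorological range is the distance at which contrast drops to a 5 % threshold. A minimal single-target sketch, assuming that standard formulation (the paper's multi-target treatment is more involved):

```python
import math

def meteorological_range(contrast, distance, c0=1.0, threshold=0.05):
    """Estimate meteorological range from one target's apparent contrast.

    Koschmieder's law: C(d) = c0 * exp(-beta * d). The meteorological
    range V is the distance at which contrast falls to `threshold`:
    V = -ln(threshold) / beta. Inputs are illustrative values.
    """
    beta = -math.log(contrast / c0) / distance   # extinction coefficient
    return -math.log(threshold) / beta
```

With several targets at different distances, fitting beta to all contrasts at once would also expose the spatial variability of extinction noted in the study.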

  15. Improvements in remote cardiopulmonary measurement using a five band digital camera.

    PubMed

    McDuff, Daniel; Gontarek, Sarah; Picard, Rosalind W

    2014-10-01

    Remote measurement of the blood volume pulse via photoplethysmography (PPG) using digital cameras and ambient light has great potential for healthcare and affective computing. However, traditional RGB cameras have limited frequency resolution. We present results of PPG measurements from a novel five band camera and show that alternate frequency bands, in particular an orange band, allowed physiological measurements much more highly correlated with an FDA approved contact PPG sensor. In a study with participants (n = 10) at rest and under stress, correlations of over 0.92 (p < 0.01) were obtained for heart rate, breathing rate, and heart rate variability measurements. In addition, the remotely measured heart rate variability spectrograms closely matched those from the contact approach. The best results were obtained using a combination of cyan, green, and orange (CGO) bands; incorporating red and blue channel observations did not improve performance. In short, RGB is not optimal for this problem: CGO is better. Incorporating alternative color channel sensors should not increase the cost of such cameras dramatically.
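
A common final step in camera-PPG pipelines like the one above is to take the dominant spectral peak of the extracted band signal within the plausible cardiac band. This generic estimator is shown for illustration; the paper's processing chain (band combination, filtering, artifact rejection) is more elaborate:

```python
import numpy as np

def heart_rate_bpm(signal, fs, lo=0.7, hi=4.0):
    """Dominant frequency in the cardiac band, converted to beats/min.

    signal: 1-D pulse waveform samples; fs: sampling rate in Hz;
    lo/hi: search band in Hz (0.7-4.0 Hz covers roughly 42-240 bpm).
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()               # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```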

  16. An innovative silicon photomultiplier digitizing camera for gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Heller, M.; Schioppa, E., Jr.; Porcelli, A.; Pujadas, I. Troyano; Ziȩtara, K.; Volpe, D. della; Montaruli, T.; Cadoux, F.; Favre, Y.; Aguilar, J. A.; Christov, A.; Prandini, E.; Rajda, P.; Rameez, M.; Bilnik, W.; Błocki, J.; Bogacz, L.; Borkowski, J.; Bulik, T.; Frankowski, A.; Grudzińska, M.; Idźkowski, B.; Jamrozy, M.; Janiak, M.; Kasperek, J.; Lalik, K.; Lyard, E.; Mach, E.; Mandat, D.; Marszałek, A.; Miranda, L. D. Medina; Michałowski, J.; Moderski, R.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Paśko, P.; Pech, M.; Schovanek, P.; Seweryn, K.; Sliusar, V.; Skowron, K.; Stawarz, Ł.; Stodulska, M.; Stodulski, M.; Walter, R.; Wiȩcek, M.; Zagdański, A.

    2017-01-01

    The single-mirror small-size telescope (SST-1M) is one of the three proposed designs for the small-size telescopes (SSTs) of the Cherenkov Telescope Array (CTA) project. The SST-1M will be equipped with a 4 m-diameter segmented reflector dish and an innovative fully digital camera based on silicon photo-multipliers. Since the SST sub-array will consist of up to 70 telescopes, the challenge is not only to build telescopes with excellent performance, but also to design them so that their components can be commissioned, assembled and tested by industry. In this paper we review the basic steps that led to the design concepts for the SST-1M camera and the ongoing realization of the first prototype, with focus on the innovative solutions adopted for the photodetector plane and the readout and trigger parts of the camera. In addition, we report on results of laboratory measurements on real scale elements that validate the camera design and show that it is capable of matching the CTA requirements of operating up to high moonlight background conditions.

  17. Assessing the application of an airborne intensified multispectral video camera to measure chlorophyll a in three Florida estuaries

    SciTech Connect

    Dierberg, F.E.; Zaitzeff, J.

    1997-08-01

    After absolute and spectral calibration, an airborne intensified, multispectral video camera was field tested for water quality assessments over three Florida estuaries (Tampa Bay, Indian River Lagoon, and the St. Lucie River Estuary). Univariate regression analysis of upwelling spectral energy vs. ground-truthed uncorrected chlorophyll a (Chl a) for each estuary yielded lower coefficients of determination (R²) with increasing concentrations of Gelbstoff within an estuary. More predictive relationships were established by adding true color as a second independent variable in a bivariate linear regression model. These regressions successfully explained most of the variation in upwelling light energy (R² = 0.94, 0.82 and 0.74 for the Tampa Bay, Indian River Lagoon, and St. Lucie estuaries, respectively). Ratioed wavelength bands within the 625-710 nm range produced the highest correlations with ground-truthed uncorrected Chl a, and were similar to those reported as being the most predictive for Chl a in Tennessee reservoirs. However, the ratioed wavebands producing the best predictive algorithms for Chl a differed among the three estuaries due to the effects of varying concentrations of Gelbstoff on upwelling spectral signatures, which precluded combining the data into a common data set for analysis.

  18. Retrieval of water quality algorithms from airborne HySpex camera for oxbow lakes in north-eastern Poland

    NASA Astrophysics Data System (ADS)

    Slapinska, Malgorzata; Berezowski, Tomasz; Frąk, Magdalena; Chormański, Jarosław

    2016-04-01

    The aim of this study was to retrieve empirical formulas for the water quality of oxbow lakes in the Lower Biebrza Basin (a river located in NE Poland) using the HySpex airborne imaging spectrometer. The Biebrza River is one of the biggest wetlands in Europe. It is characterised by a low contamination level and little human influence. Because of these characteristics, the Biebrza River can be treated as a reference area for other floodplain and fen ecosystems in Europe. Oxbow lakes are an important part of the Lower Biebrza Basin due to their retention and habitat functions. Hyperspectral remote sensing data were acquired by the HySpex sensor (which covers the range of 400-2500 nm) on 01-02.08.2015, with the ground measurement campaign conducted on 03-04.08.2015. The ground measurements consisted of two parts. The first part included spectral reflectance sampling with an ASD FieldSpec 3 spectroradiometer, which covers the wavelength range of 350-2500 nm at 1 nm intervals. In situ data were collected both for water and for specific objects within the area. The second part of the campaign included water parameters such as Secchi disc depth (SDD), electric conductivity (EC), pH, temperature and phytoplankton. The measured reflectance enabled an empirical line atmospheric correction, which was applied to the HySpex data. Our results indicated that proper atmospheric correction was very important for further data analysis. The empirical formulas for the water parameters were retrieved from the reflectance data. This study confirmed the applicability of the HySpex camera for retrieving water quality.
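
Empirical line atmospheric correction, as used above, fits a per-band linear gain and offset between at-sensor values and field-measured reflectances of reference surfaces, then applies that line to the whole image. A least-squares sketch with hypothetical target data:

```python
import numpy as np

def empirical_line(at_sensor, field_reflectance):
    """Per-band gain and offset for empirical-line correction.

    at_sensor, field_reflectance: (n_targets, n_bands) arrays of
    at-sensor values and field-measured reflectances for the same
    reference surfaces. Returns (gain, offset) per band such that
    reflectance ≈ gain * at_sensor + offset.
    """
    dn = np.asarray(at_sensor, dtype=float)
    ref = np.asarray(field_reflectance, dtype=float)
    gains, offsets = [], []
    for b in range(dn.shape[1]):
        A = np.column_stack([dn[:, b], np.ones(dn.shape[0])])
        (g, o), *_ = np.linalg.lstsq(A, ref[:, b], rcond=None)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)
```

At least two spectrally distinct reference targets per band are needed for the fit to be well posed; more targets average down measurement noise.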

  19. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device, not the printer. When printing images from a camera, the user needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors; the result was visually more pleasing than images captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
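
Applying per-channel transfer curves of the kind described above amounts to a lookup with interpolation between control points. The control-point representation here is an illustrative assumption (the paper built its curves interactively in Adobe PhotoShop):

```python
import numpy as np

def apply_transfer_curves(image, curves):
    """Apply smoothed per-channel transfer curves to an 8-bit RGB image.

    `curves` maps each channel index (0=R, 1=G, 2=B) to a pair of
    (input_points, output_points) control points; values between
    control points are linearly interpolated.
    """
    image = np.asarray(image, dtype=float)
    out = np.empty_like(image)
    for c in range(3):
        xp, fp = curves[c]
        out[..., c] = np.interp(image[..., c], xp, fp)
    return np.clip(out, 0, 255).astype(np.uint8)
```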

  20. Development of High Speed Digital Camera: EXILIM EX-F1

    NASA Astrophysics Data System (ADS)

    Nojima, Osamu

    The EX-F1 is a high-speed digital camera featuring a revolutionary improvement in burst shooting speed that is expected to create entirely new markets. The model incorporates a high-speed CMOS sensor and a high-speed LSI processor. With this model, CASIO has achieved an ultra-high-speed 60 frames per second (fps) burst rate for still images, together with 1,200 fps high-speed movie capture that records movements too fast for the human eye to see. Moreover, the model can record movies in full high definition. After its market launch, it received high praise as an innovative camera. We introduce the concept, features and technologies of the EX-F1.

  1. Reflectance measurement using digital camera and a protecting dome with built in light source.

    PubMed

    Välisuo, Petri; Harju, Toni; Alander, Jarmo

    2011-08-01

    The reflectance of the skin reveals chemical and physical changes in the skin as well as many metabolic changes. Reflectance measurement is an important method for medical diagnosis, follow-up, and screening. This article concentrates on designing and validating an imaging system based on a digital camera. The proposed system can measure the reflectance of the skin with high spatial resolution and, currently, four-channel spectral resolution in the range of 450 nm to 980 nm. The accuracy of the system is determined by imaging a colour checker board and comparing the obtained values with both the given values and spectrometer measurements. The diffuse interreflections of both the integrating sphere and the lighting dome of the imaging system are compensated with a correction factor. The accuracy of the proposed system is only slightly lower than that of the spectrometer, and the imaging system characteristics are independent of the camera characteristics.
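The core of camera-based reflectance estimation against a white reference can be sketched as below. The function name and the scalar form of the interreflection correction are assumptions for illustration; the paper's actual correction factor is derived from its dome geometry.

```python
# Sketch: estimate reflectance from raw camera values using dark and white
# reference frames, with a scalar correction factor for diffuse
# interreflections (its value here is illustrative, not from the paper).
def reflectance(sample, white, dark, ref_reflectance=0.99, correction=1.0):
    """Estimate sample reflectance from raw camera digital values:
    R = correction * R_ref * (S - D) / (W - D)."""
    return correction * ref_reflectance * (sample - dark) / (white - dark)

r = reflectance(sample=520.0, white=1000.0, dark=20.0)
```

Subtracting the dark frame removes sensor offset, and normalizing by the white reference cancels the unknown illumination and lens falloff.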

  2. Geo-Referenced Mapping Using an Airborne 3D Time-of-Flight Camera

    NASA Astrophysics Data System (ADS)

    Kohoutek, T. K.; Nitsche, M.; Eisenbeiss, H.

    2011-09-01

    This paper presents first experiences with close-range bird's-eye-view photogrammetry using range imaging (RIM) sensors for the real-time generation of high-resolution geo-referenced 3D surface models. The aim of this study was to develop a mobile, versatile and less costly outdoor survey methodology for measuring natural surfaces compared to terrestrial laser scanning (TLS). Two commercial RIM cameras (SR4000 by MESA Imaging AG and CamCube 2.0 by PMDTechnologies GmbH) were mounted on a lightweight crane and on an unmanned aerial vehicle (UAV). The field experiments revealed various challenges in real-time deployment of the two state-of-the-art RIM systems, e.g. processing of the large data volume. The acquisition strategy, data processing, and first measurements are presented. The precision of the measured distances is better than 1 cm under good conditions; however, measurement precision degraded under the test conditions due to direct sunlight, strong illumination contrasts, and helicopter vibrations.

  3. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing demand for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also argue that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of downstream photogrammetric products. PMID:25835187

  4. Cataract screening by minimally trained remote observer with non-mydriatic digital fundus camera

    NASA Astrophysics Data System (ADS)

    Choi, Ann; Hjelmstad, David; Taibl, Jessica N.; Sayegh, Samir I.

    2013-03-01

    We propose a method that allows an inexperienced observer, through examination of a digital fundus image of the retina on a computer screen, to simply determine the presence of a cataract and the need to refer the patient for further evaluation. Fundus photos obtained with a non-mydriatic camera were presented to an inexperienced observer who was briefly instructed on fundus imaging, the nature of cataracts and their probable effect on the image of the retina, and the use of a computer program presenting fundus image pairs. Preliminary results of pair testing indicate the method is very effective.

  5. Acquisition of Diagnostic Screen and Synchrotron Radiation Images Using IEEE1394 Digital Cameras

    NASA Astrophysics Data System (ADS)

    Rehm, G.

    2004-11-01

    In the LINAC, booster synchrotron and transfer lines of DIAMOND a number of screens (YAG:Ce and OTR) as well as synchrotron radiation ports will be used to acquire information about the transverse beam distribution. Digital IEEE1394 cameras have been selected for their range of sensor sizes and resolutions available, their easy triggering to single events, and their noise-free transmission of the images into the control system. Their suitability for use under influence of high-energy radiation has been verified. Images from preliminary tests at the SRS Daresbury are presented.

  6. Determination of the diffusion coefficient between corn syrup and distilled water using a digital camera

    NASA Astrophysics Data System (ADS)

    Ray, E.; Bunton, P.; Pojman, J. A.

    2007-10-01

    A simple technique for determining the diffusion coefficient between two miscible liquids is presented based on observing concentration-dependent ultraviolet-excited fluorescence using a digital camera. The ultraviolet-excited visible fluorescence of corn syrup is proportional to the concentration of the syrup. The variation of fluorescence with distance from the transition zone between the fluids is fit by the Fick's law solution to the diffusion equation. By monitoring the concentration at successive times, the diffusion coefficient can be determined in otherwise transparent materials. The technique is quantitative and makes measurement of diffusion accessible in the advanced undergraduate physics laboratory.
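The fitting step described above can be sketched numerically. The Fick's-law solution for an initially sharp interface between two semi-infinite liquids is c(x,t) = (c0/2)(1 - erf(x / (2√(Dt)))); here a simple grid search stands in for the least-squares fit the authors perform, and all names and values are illustrative.

```python
import math

def profile(x, t, D, c0=1.0):
    """Fick's-law concentration for a sharp initial interface between two
    semi-infinite miscible liquids: c = c0/2 * (1 - erf(x / (2*sqrt(D*t))))."""
    return 0.5 * c0 * (1.0 - math.erf(x / (2.0 * math.sqrt(D * t))))

def fit_D(xs, cs, t, candidates):
    """Pick the diffusion coefficient whose predicted profile best matches the
    measured concentrations (grid search standing in for a least-squares fit)."""
    def sse(D):
        return sum((profile(x, t, D) - c) ** 2 for x, c in zip(xs, cs))
    return min(candidates, key=sse)

# Synthetic 'fluorescence-derived' profile after one hour with D = 5e-10 m^2/s:
t = 3600.0
xs = [i * 1e-4 for i in range(-10, 11)]   # positions from -1 mm to +1 mm
cs = [profile(x, t, 5e-10) for x in xs]
best = fit_D(xs, cs, t, [d * 1e-10 for d in range(1, 11)])
```

Because the camera records fluorescence proportional to concentration, the measured intensity profile can be normalized and fit directly in the same way.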

  7. Conversion from light to numerical signal in a digital camera pipeline

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2010-11-01

    The goal of this paper is to simulate the conversion from light to numerical signal that occurs as an image propagates through the digital camera pipeline. We focus on the spectral and resolution analysis of the optical system, Bayer sampling, photon shot and fixed-pattern noise, high-dynamic-range imaging, amplitude and bilateral filters, and analog-to-digital conversion. The image capture system consists of a flash illumination source, a Cooke triplet photographic objective and a passive-pixel CMOS sensor. We use a spectral image to simulate the illumination and the propagation of the light through the optical system components. Fourier optics is used to compute the point spread function specific to each optical component. We consider the image acquisition system to be linear, shift-invariant and axial; the light propagation is orthogonal to the system.
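Three of the pipeline stages named above, Bayer sampling, photon shot noise, and analog-to-digital conversion, can be sketched in a few lines. This is a generic illustration, not the paper's simulator: it omits the optics PSF, fixed-pattern noise, and filtering stages, and the RGGB layout and noise model are common textbook choices.

```python
import random

def bayer_sample(rgb_image):
    """Sample an RGB image through an RGGB Bayer mosaic: each pixel keeps
    only the channel its color-filter position admits."""
    h, w = len(rgb_image), len(rgb_image[0])
    mosaic = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb_image[y][x]
            if y % 2 == 0:
                mosaic[y][x] = r if x % 2 == 0 else g
            else:
                mosaic[y][x] = g if x % 2 == 0 else b
    return mosaic

def add_shot_noise(mosaic, rng):
    """Approximate photon shot noise as Gaussian with variance equal to the
    signal (a standard approximation for large photon counts)."""
    return [[max(0.0, v + rng.gauss(0.0, v ** 0.5)) for v in row] for row in mosaic]

def quantize(mosaic, bits=8, full_scale=255.0):
    """Ideal analog-to-digital conversion to the given bit depth."""
    levels = 2 ** bits - 1
    return [[min(levels, max(0, round(v / full_scale * levels))) for v in row]
            for row in mosaic]
```

Chaining `quantize(add_shot_noise(bayer_sample(img), rng))` mimics the sensor-side portion of the pipeline the paper analyzes.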

  8. Erosion research with a digital camera: the structure from motion method used in gully monitoring - field experiments from southern Morocco

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Rock, Gilles; Neugirg, Fabian; Müller, Christoph; Ries, Johannes

    2014-05-01

    From a geoscientific view, arid or semiarid landscapes are often associated with soil-degrading erosion processes and thus active geomorphology. In this regard, gully incision represents one of the most important influences on surface dynamics. Established approaches to monitoring and quantifying soil loss require costly and labor-intensive measuring methods: terrestrial or airborne LiDAR scans to create digital elevation models and unmanned aerial vehicles for image acquisition provide adequate tools for geomorphological surveying. Despite their ever-advancing abilities, they are limited in their applicability to detailed recordings of complex surfaces. Especially undercuttings and plunge pools in the headcut area of gully systems are invisible to them or cause shadowing effects. The presented work aims to apply and advance an adequate tool that avoids the above-mentioned obstacles and weaknesses of the established methods. The emerging structure-from-motion-based high-resolution 3D visualisation has proved useful beyond gully erosion; it also provides a solid basis for additional applications in the geosciences such as surface roughness measurement, quantification of gravitational mass movements, and capturing stream connectivity. During field campaigns in semiarid southern Morocco, a commercial DSLR camera was used to produce images that served as input data for software-based point cloud and mesh generation. Complex land surfaces could thus be reconstructed entirely in high resolution by photographing the object from different perspectives. At different scales, the resulting 3D mesh represents a holistic reconstruction of the actual shape complexity, with its limits set only by computing capacity. Analysis and visualization of time series of different erosion-related events illustrate the additional benefit of the method. It opens new perspectives on process understanding that can be exploited by open source and commercial software. Results depicted a soil loss of 5
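Once time-series DEMs have been reconstructed, soil loss is commonly quantified by differencing them cell by cell. The following is a minimal sketch of that idea, assuming two co-registered DEMs on the same grid; it is not the authors' workflow and ignores registration error and uncertainty thresholds.

```python
# Sketch: net volume change between two co-registered DEMs sampled on the
# same regular grid. Negative values indicate material loss (erosion).
def volume_change(dem_before, dem_after, cell_area):
    """Sum per-cell elevation change times cell area (m^3)."""
    total = 0.0
    for row_b, row_a in zip(dem_before, dem_after):
        for zb, za in zip(row_b, row_a):
            total += (za - zb) * cell_area
    return total

dv = volume_change([[1.0, 1.0], [1.0, 1.0]],
                   [[0.5, 1.0], [1.0, 1.0]], cell_area=2.0)
```

In practice a change-detection threshold based on the DEMs' vertical accuracy would be applied before summing, so that reconstruction noise is not counted as erosion.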

  9. Molecular Shocks Associated with Massive Young Stars: CO Line Images with a New Far-Infrared Spectroscopic Camera on the Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Watson, Dan M.

    1997-01-01

    Under the terms of our contract with NASA Ames Research Center, the University of Rochester (UR) offers the following final technical report on grant NAG 2-958, Molecular shocks associated with massive young stars: CO line images with a new far-infrared spectroscopic camera, given for implementation of the UR Far-Infrared Spectroscopic Camera (FISC) on the Kuiper Airborne Observatory (KAO), and use of this camera for observations of star-formation regions 1. Two KAO flights in FY 1995, the final year of KAO operations, were awarded to this program, conditional upon a technical readiness confirmation which was given in January 1995. The funding period covered in this report is 1 October 1994 - 30 September 1996. The project was supported with $30,000, and no funds remained at the conclusion of the project.

  10. The PANOPTES project: discovering exoplanets with low-cost digital cameras

    NASA Astrophysics Data System (ADS)

    Guyon, Olivier; Walawender, Josh; Jovanovic, Nemanja; Butterfield, Mike; Gee, Wilfred T.; Mery, Rawad

    2014-07-01

    The Panoptic Astronomical Networked OPtical observatory for Transiting Exoplanets Survey (PANOPTES, www.projectpanoptes.org) project aims to identify transiting exoplanets using a wide network of low-cost imaging units. Each unit consists of two commercial digital single-lens reflex (DSLR) cameras equipped with 85 mm F1.4 lenses, mounted on a small equatorial mount. At a few thousand dollars per unit, the system offers a uniquely advantageous survey efficiency for the cost, and can easily be assembled by amateur astronomers or students. Three generations of prototype units have so far been tested, and the baseline unit design, which optimizes robustness, simplicity and cost, is now ready to be duplicated. We describe the hardware and software for the PANOPTES project, focusing on its key challenges. We show that obtaining high-precision photometric measurements with commercial DSLR color cameras is possible, using a PSF-matching algorithm we developed for this project. On-sky tests show that percent-level photometric precision is achieved in 1 min with a single camera. We also discuss hardware choices aimed at optimizing system robustness while maintaining adequate cost. PANOPTES is both an outreach project and a scientifically compelling survey for transiting exoplanets. In its current phase, experienced PANOPTES members are deploying a limited number of units, acquiring the experience necessary to run the network. A much wider community will then be able to participate in the project, with schools and citizen scientists integrating their units into the network.
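Transit photometry of the kind PANOPTES performs rests on differential measurement: the target's flux is ratioed against comparison stars in the same frame so that transparency and airmass changes cancel. The sketch below shows only that core idea; it is not the project's PSF-matching pipeline, and the function name is illustrative.

```python
import math

def differential_magnitude(target_flux, comparison_fluxes):
    """Relative magnitude of a target star against the summed flux of
    comparison stars in the same frame: m = -2.5 * log10(F_t / F_ref).
    Frame-to-frame transparency changes affect both fluxes and cancel."""
    ref = sum(comparison_fluxes)
    return -2.5 * math.log10(target_flux / ref)

# A 1% transit dip appears as a ~0.011 mag increase in relative magnitude.
baseline = differential_magnitude(100.0, [50.0, 50.0])
in_transit = differential_magnitude(99.0, [50.0, 50.0])
```

Percent-level precision, as reported above, is exactly the level needed to see the ~1% flux dips of hot-Jupiter transits.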

  11. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    NASA Astrophysics Data System (ADS)

    Cheng, Andrew Y. S.; Pau, Michael C. Y.

    1996-09-01

    A high-resolution computer-controlled CCD image capturing system is developed using a 12-bit 1024 × 1024 pixel CCD camera and motorized RGB filters to capture an image with color depth up to 36 bits. The filters separate the major color components and collect them individually while the CCD camera maintains the spatial resolution and detector filling factor, so the color separation is done optically rather than electronically. Operation consists simply of placing objects such as color photos, slides, and even x-ray transparencies under the camera system; the necessary parameters such as integration time, mixing level, and light intensity are automatically adjusted by an on-line expert system. This greatly reduces the restrictions on what can be captured. This unique approach can save considerable time in adjusting image quality and gives much more flexibility in manipulating the captured object, even a 3D object, with minimal setup fixtures. In addition, the cross-sectional dimensions of a 3D object can be analyzed by adapting a fiber-optic ring light source, which is particularly useful in non-contact metrology of 3D structures. The digitized information can be stored in an easily transferable format, and users can also apply a special LUT mapping automatically or manually. Applications of the system include medical image archiving, printing quality control, and 3D machine vision.

  12. Cloud Height Estimation with a Single Digital Camera and Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Carretas, Filipe; Janeiro, Fernando M.

    2014-05-01

    Clouds influence the local weather and the global climate and are an important parameter in weather prediction models. Clouds are also an essential component of airplane safety when visual flight rules (VFR) are enforced, as in most small aerodromes where it is not economically viable to install instruments for assisted flying. It is therefore important to develop low-cost and robust systems that can be easily deployed in the field, enabling large-scale acquisition of cloud parameters. Recently, the authors developed a low-cost system for the measurement of cloud base height using stereo vision and digital photography. However, the stereo nature of the system presented some challenges: the relative camera orientation requires calibration, and the two cameras need to be synchronized so that the photos from both cameras are acquired simultaneously. In this work we present a new system that estimates cloud heights between 1000 and 5000 meters. The prototype is composed of one digital camera controlled by a Raspberry Pi and is installed at Centro de Geofísica de Évora (CGE) in Évora, Portugal. The camera is periodically triggered to acquire images of the overhead sky, and the photos are downloaded to the Raspberry Pi, which forwards them to a central computer that processes the images and estimates the cloud height in real time. Estimating cloud height from just one image requires a computer model that is able to learn from previous experience and perform pattern recognition. The model proposed in this work is an Artificial Neural Network (ANN) that was previously trained with cloud features at different heights. The chosen network is a three-layer network with six parameters in the input layer, 12 neurons in the hidden intermediate layer, and an output layer with only one output. The six input parameters are the average intensity values and the intensity standard deviation of each RGB channel. The output
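The 6-12-1 network described above can be sketched as a feature extractor plus a plain forward pass. The per-channel mean and standard deviation inputs come from the abstract; the tanh activation and all weight values below are placeholders for what training would produce.

```python
import math

def sky_features(pixels):
    """The six ANN inputs named above: per-channel mean and standard
    deviation of an RGB sky image (pixels is a list of (R, G, B) tuples)."""
    feats = []
    n = len(pixels)
    for ch in range(3):
        vals = [p[ch] for p in pixels]
        mean = sum(vals) / n
        std = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
        feats += [mean, std]
    return feats  # [mean_R, std_R, mean_G, std_G, mean_B, std_B]

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 6-12-1 network with tanh hidden units; the weights
    are placeholders for values learned during training on labeled images."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out
```

In deployment, `mlp_forward(sky_features(image), ...)` would return the estimated cloud base height for one sky image.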

  13. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  14. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ~3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
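The core geometry of ranging a surface sighting from a known camera height can be sketched as follows. This covers only the visible-horizon, flat-sea case; the paper's R functions additionally handle frame rotation, pixel-size estimation from reference points, and the no-horizon case. The function name and parameters are illustrative.

```python
import math

def sighting_distance(cam_height, pixels_below_horizon, pixel_angle):
    """Distance (m) to an object at the sea surface, from observer height and
    the depression angle implied by its pixel offset below the horizon.
    pixel_angle is the angular size of one pixel in radians (from camera
    calibration). Flat-surface approximation, adequate for ranges of a few km."""
    depression = pixels_below_horizon * pixel_angle
    return cam_height / math.tan(depression)

# Example: camera 9.59 m above sea level, sighting 150 pixels below the
# horizon with 0.0001 rad/pixel resolution.
d = sighting_distance(9.59, 150, 1e-4)
```

Combined with the bearing read from the horizontal pixel offset, the distance fixes the sighting's geographic coordinates relative to the observation point.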

  15. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
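The LEAN idea, predicting the vegetation-induced lidar error from a vegetation index and subtracting it, can be sketched as a regression calibrated on RTK-GPS points. This one-predictor ordinary-least-squares version is a simplification for illustration; the published model may use additional terms, and the function names are not from the paper.

```python
# Sketch: calibrate a linear model of lidar elevation error against NDVI
# using RTK-GPS ground truth, then correct lidar elevations with it.
def fit_lean(ndvi, lidar_z, rtk_z):
    """OLS fit of error = lidar_z - rtk_z against NDVI: error ~ a + b*NDVI."""
    n = len(ndvi)
    err = [lz - rz for lz, rz in zip(lidar_z, rtk_z)]
    mx = sum(ndvi) / n
    my = sum(err) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(ndvi, err))
    sxx = sum((x - mx) ** 2 for x in ndvi)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def correct_dem(z, ndvi, a, b):
    """Subtract the vegetation-predicted error from a lidar elevation."""
    return z - (a + b * ndvi)
```

Applied cell by cell with NDVI from the NAIP imagery, this keeps the DEM's 1 m resolution while removing the systematic positive bias over dense marsh vegetation.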

  16. A digital elevation model of the Greenland ice sheet and validation with airborne laser altimeter data

    NASA Technical Reports Server (NTRS)

    Bamber, Jonathan L.; Ekholm, Simon; Krabill, William B.

    1997-01-01

    A 2.5 km resolution digital elevation model (DEM) of the Greenland ice sheet was produced from the 336 days of the geodetic phase of ERS-1. During this period the altimeter was operating in ice-mode over land surfaces providing improved tracking around the margins of the ice sheet. Combined with the high density of tracks during the geodetic phase, a unique data set was available for deriving a DEM of the whole ice sheet. The errors present in the altimeter data were investigated via a comparison with airborne laser altimeter data obtained for the southern half of Greenland. Comparison with coincident satellite data showed a correlation with surface slope. An explanation for the behavior of the bias as a function of surface slope is given in terms of the pattern of surface roughness on the ice sheet.

  17. First Results from an Airborne Ka-band SAR Using SweepSAR and Digital Beamforming

    NASA Technical Reports Server (NTRS)

    Sadowy, Gregory; Ghaemi, Hirad; Hensley, Scott

    2012-01-01

    NASA/JPL has developed the SweepSAR technique, which breaks the typical Synthetic Aperture Radar (SAR) trade space by using time-dependent multi-beam digital beamforming (DBF) on receive. A SweepSAR implementation using an array-fed reflector is being developed for the proposed DESDynI Earth Radar Mission concept. We performed a first-of-a-kind airborne demonstration of the SweepSAR concept at Ka-band (35.6 GHz) and validated calibration and antenna pattern data sufficient for beamforming in elevation. The demonstration (1) provides evidence that the proposed Deformation, Ecosystem Structure and Dynamics of Ice (DESDynI) SAR architecture is sound, and (2) shows that the technique functions well even with large variations in receiver gain and phase. Future plans include using prototype DESDynI SAR digital flight hardware to perform the beamforming in real time onboard the aircraft.
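Digital beamforming on receive amounts to phase-aligning and summing the element signals so that returns from a chosen elevation angle add coherently. The narrowband sketch below is a generic textbook formulation, not JPL's implementation; the array geometry and weights are illustrative.

```python
import cmath
import math

def steer_beam(element_signals, element_pos, wavelength, angle):
    """Narrowband DBF on receive for a linear array: weight each element's
    complex signal by exp(-j*k*x*sin(angle)) and sum, steering the beam to
    `angle` (radians from boresight). k = 2*pi/wavelength."""
    k = 2.0 * math.pi / wavelength
    out = 0j
    for s, x in zip(element_signals, element_pos):
        out += s * cmath.exp(-1j * k * x * math.sin(angle))
    return out

# A plane wave arriving from 0.1 rad sums coherently when steered to 0.1 rad.
wl = 0.00842                     # ~Ka-band wavelength at 35.6 GHz, in meters
pos = [0.0, 0.005, 0.01, 0.015]  # illustrative element positions
sig = [cmath.exp(1j * (2 * math.pi / wl) * x * math.sin(0.1)) for x in pos]
peak = steer_beam(sig, pos, wl, 0.1)
```

Because the steering weights are applied digitally, the same element data can be re-weighted to form multiple simultaneous, time-varying beams, which is the essence of SweepSAR.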

  18. High-end aerial digital cameras and their impact on the automation and quality of the production workflow

    NASA Astrophysics Data System (ADS)

    Paparoditis, Nicolas; Souchon, Jean-Philippe; Martinoty, Gilles; Pierrot-Deseilligny, Marc

    The IGN digital camera project was established in the early 1990s. The first research surveys were carried out in 1996 and the digital camera was first used in production in 2000. In 2004 approximately 10 French departments (accounting for 10% of the territory) were covered with a four-head camera system and since summer 2005 all IGN imagery has been acquired digitally. Nevertheless the camera system is still evolving, with tests on new geometric configurations being continuously undertaken. The progressive integration of the system in IGN production workflows has allowed IGN to keep the system evolving in accordance with production needs. Remaining problems are due to specific camera characteristics such as CCD format, the optical quality of off-the-shelf lenses, and because some production tools are ill-adapted to digital images with a large dynamic range. However, when considering the pros and cons of integrating these images into production lines, the disadvantages are largely balanced by the numerous benefits this technology offers.

  19. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  20. Combining multi-spectral proximal sensors and digital cameras for monitoring grazed tropical pastures

    NASA Astrophysics Data System (ADS)

    Handcock, R. N.; Gobbett, D. L.; González, L. A.; Bishop-Hurley, G. J.; McGavin, S. L.

    2015-11-01

    Timely and accurate monitoring of pasture biomass and ground cover is necessary in livestock production systems to ensure productive and sustainable management of forage. Interest in the use of proximal sensors for monitoring pasture status in grazing systems has increased, since such sensors can return data in near real time and have the potential to be deployed on large properties where remote sensing may not be suitable due to issues such as spatial scale or cloud cover. However, there are unresolved challenges in developing calibrations to convert raw sensor data to quantitative biophysical values, such as pasture biomass or vegetation ground cover, to allow meaningful interpretation of sensor data by livestock producers. We assessed the use of multiple proximal sensors for monitoring tropical pastures in a pilot deployment at two sites on Lansdown Research Station near Townsville, Australia. Each site was monitored by a Skye SKR four-band multi-spectral sensor (every 1 min), a digital camera (every 30 min), and a soil moisture sensor (every 1 min), each operated over 18 months. Raw data from each sensor were processed to calculate a number of multispectral vegetation indices. Visual observations of pasture characteristics, including above-ground standing biomass and ground cover, were made every 2 weeks. A methodology was developed to manage the sensor deployment and the quality control of the collected data. Data capture from the digital cameras was more reliable than from the multi-spectral sensors, which had up to 63 % of data discarded after data cleaning and quality control. We found a strong relationship between sensor and pasture measurements during the wet-season period of maximum pasture growth (January to April), especially when data from the multi-spectral sensors were combined with weather data. RatioNS34 (a simple band ratio between the near infrared (NIR) and lower shortwave infrared (SWIR) bands) and rainfall since 1
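Multispectral vegetation indices of the kind computed from the four-band sensor data reduce to simple arithmetic on band readings. The sketch below shows NDVI and a NIR/SWIR ratio analogous in spirit to the RatioNS34 index mentioned above; the exact band definitions and the index name belong to the authors, and these function names are illustrative.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from two band radiances:
    (NIR - red) / (NIR + red), in [-1, 1]."""
    return (nir - red) / (nir + red)

def nir_swir_ratio(nir, swir):
    """Simple NIR/SWIR band ratio, sensitive to canopy water content."""
    return nir / swir

green_pasture = ndvi(nir=0.5, red=0.1)    # dense green vegetation -> high NDVI
dry_pasture = ndvi(nir=0.25, red=0.2)     # senescent cover -> low NDVI
```

Calibrating such indices against the fortnightly biomass observations is the step the study identifies as the main unresolved challenge.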

  1. Nonlinear color-image decomposition for image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi

    2009-01-01

    This paper extends the BV (Bounded Variation)-G and/or BV-L1 variational nonlinear image-decomposition approaches, which are considered useful for image processing in a digital color camera, to genuine color-image decomposition approaches. To utilize inter-channel color cross-correlations, the paper first introduces TV (Total Variation) norms of color differences and of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition algorithms with an over-complete wavelet transform by applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV component, and they achieve desirable high-quality color-image decomposition that is very robust against colored random noise.
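The building block of the energies discussed above is the discrete total variation of a channel. A minimal anisotropic version is sketched below; in the paper it would be applied to color-difference and color-sum channels rather than raw R, G, B, and the function name is illustrative.

```python
def tv_norm(channel):
    """Discrete anisotropic total variation of a 2-D array: the sum of
    absolute horizontal and vertical neighbor differences. Small for smooth
    regions, large for noisy or highly textured ones."""
    h, w = len(channel), len(channel[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(channel[y][x + 1] - channel[y][x])
            if y + 1 < h:
                tv += abs(channel[y + 1][x] - channel[y][x])
    return tv
```

Minimizing an energy containing such TV terms favors piecewise-smooth structure in the BV component while pushing oscillatory content (texture, noise) into the residual component.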

  2. Optical wide field monitor AROMA-W using multiple digital single-lens reflex cameras

    NASA Astrophysics Data System (ADS)

    Takahashi, Ichiro; Tsunashima, Kosuke; Tatsuhito, Takeda; Saori, Ono; Kazutaka, Yamaoka; Yoshida, Atsumasa

    2010-12-01

    We have developed and operate the automatic optical observation facility Aoyama Gakuin University Robotic Optical Monitor for Astrophysical objects - Wide field (AROMA-W). It covers a large field of view of about 45 degrees × 30 degrees at a time using multiple digital single-lens reflex cameras, and provides photometric data in four bands with a limiting V magnitude of about 12-13 (20 seconds, 3 sigma level). An automatic analysis pipeline that can run in parallel with observation has been constructed; it can draw the light curves of all stars in the field of view of AROMA-W. We aim at simultaneous observation of transients (e.g., X-ray novae, supernovae, GRBs) discovered by MAXI using AROMA-W. We report the development status, the observational results of AROMA-W, and the possibility of simultaneous observation of the X-ray transients discovered with MAXI.

  3. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties--high localization, robustness, and long lifetime--that are the essential requisites of solitary waves.

  4. Airborne Network Camera Standard

    DTIC Science & Technology

    2015-06-01

    primarily to cover terminology included in or consistent with the GigE Vision (GEV) and IRIG 106-13 Chapter 10 standards for command and control over a variety of... RCC Document

  5. A Cryogenic, Insulating Suspension System for the High Resolution Airborne Wideband Camera (HAWC) and Submillimeter And Far Infrared Experiment (SAFIRE) Adiabatic Demagnetization Refrigerators (ADRs)

    NASA Technical Reports Server (NTRS)

    Voellmer, George M.; Jackson, Michael L.; Shirron, Peter J.; Tuttle, James G.

    2002-01-01

    The High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter And Far Infrared Experiment (SAFIRE) will use identical Adiabatic Demagnetization Refrigerators (ADRs) to cool their detectors to 200 mK and 100 mK, respectively. In order to minimize thermal loads on the salt pill, a Kevlar suspension system is used to hold it in place. An innovative, kinematic suspension system is presented. The suspension system is unique in that it consists of two parts that can be assembled and tensioned offline, and later bolted onto the salt pill.

  6. Calibration of Low Cost Digital Camera Using Data from Simultaneous LIDAR and Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Debiasi, P.; Hainosz, F.; Centeno, J.

    2012-07-01

    Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters to perform photogrammetric applications. Camera calibration is a procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions in which the photogrammetric survey is done. This paper shows the methodology and experimental results from in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was installed on the test field for use as check points or control points. The photogrammetric images and lidar dataset of the test field were taken simultaneously. Four flight strips were used to obtain a cross layout. The strips were taken in opposite directions of flight (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency of the group of interior and exterior orientation parameters.
This linear dependency happens, in the calibration procedure, when the vertical images and flat test field are

  7. Multi-ion detection by one-shot optical sensors using a colour digital photographic camera.

    PubMed

    Lapresta-Fernández, Alejandro; Capitán-Vallvey, Luis Fermín

    2011-10-07

    The feasibility and performance of a procedure to evaluate previously developed one-shot optical sensors as single and selective analyte sensors for potassium, magnesium and hardness are presented. The procedure uses a conventional colour digital photographic camera as the detection system for simultaneous multianalyte detection. A 6.0 megapixel camera was used, and the procedure describes how it is possible to quantify potassium, magnesium and hardness simultaneously from the images captured, using multianalyte one-shot sensors based on ionophore-chromoionophore chemistry, employing the colour information computed from a defined region of interest on the sensing membrane. One of the colour channels in the red, green, blue (RGB) colour space is used to build the analytical parameter, the effective degree of protonation (1-α(eff)), in good agreement with the theoretical model. The linearization of the sigmoidal response function improves the limit of detection (LOD) and analytical range in all cases studied. The improvements were from 5.4 × 10⁻⁶ to 2.7 × 10⁻⁷ M for potassium, from 1.4 × 10⁻⁴ to 2.0 × 10⁻⁶ M for magnesium and from 1.7 to 2.0 × 10⁻² mg L⁻¹ of CaCO₃ for hardness. The method's precision was determined in terms of the relative standard deviation (RSD%), which was from 2.4 to 7.6 for potassium, from 6.8 to 7.8 for magnesium and from 4.3 to 7.8 for hardness. The procedure was applied to the simultaneous determination of potassium, magnesium and hardness using multianalyte one-shot sensors in different types of waters and beverages in order to cover the entire application range, statistically validating the results against atomic absorption spectrometry as the reference procedure. Accordingly, this paper is an attempt to demonstrate the possibility of using a conventional digital camera as an analytical device to measure this type of one-shot sensor based on ionophore-chromoionophore chemistry instead of using conventional lab
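
    The colour-channel normalization described in the abstract can be sketched as follows. This is an illustrative two-reference normalization only; the function name, channel choice and reference values are hypothetical, not taken from the paper:

```python
import numpy as np

def effective_protonation(roi, v_prot, v_deprot, channel=0):
    """Effective degree of protonation (1 - alpha_eff) estimated from the
    mean value of one RGB channel over the sensing-membrane region of
    interest, normalized between the fully protonated (v_prot) and fully
    deprotonated (v_deprot) reference values.  Illustrative normalization
    only; the paper builds its analytical parameter from one RGB colour
    channel in an analogous way."""
    v = roi[..., channel].astype(float).mean()
    return (v - v_deprot) / (v_prot - v_deprot)

# Hypothetical ROI whose red channel sits halfway between the references:
roi = np.zeros((10, 10, 3), dtype=np.uint8)
roi[..., 0] = 150
p = effective_protonation(roi, v_prot=200, v_deprot=100)
```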

  8. [An applied research on effective health care planning using cellular phone with the digital still camera function].

    PubMed

    Yoshiyama, Naoki; Hashimoto, Akihiro; Nakajima, Kieko; Hattori, Shin; Sugita, Fukashi

    2004-12-01

    In order to make effective health care plans for elder home care patients, we have tried an easy communication tool between medical doctors and their patients. This tool is the cellular phone with digital still camera function (CP-DSC). We have achieved successful results using this type of technology.

  9. High-quality virus images obtained by transmission electron microscopy and charge coupled device digital camera technology.

    PubMed

    Tiekotter, Kenneth L; Ackermann, Hans-W

    2009-07-01

    The introduction of digital cameras has led to the publication of numerous virus electron micrographs of low magnification, poor contrast, and low resolution. Described herein is the methodology for obtaining highly contrasted virus images in the magnification range of approximately 250,000-300,000x. Based on recent advances in charge-coupled device (CCD) digital camera technology, methodology is described for optimal imaging parameters when using CCD cameras mounted in the side- and bottom-mount positions of electron microscopes, with recommendations for higher accelerating voltages, larger objective apertures, and a small spot size. The authors are concerned with the principles of image formation and modulation, advocate a better use of imaging software to improve image quality, and recommend either pre- or post-acquisition adjustment for distributing the pixel intensities of compressed histograms over the entire range of tonal values.
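
    The recommended post-acquisition adjustment, distributing the pixel intensities of a compressed histogram over the entire tonal range, amounts to a linear contrast stretch. A generic sketch (the percentile clipping values are arbitrary choices, not the authors' settings):

```python
import numpy as np

def stretch_histogram(img, low_pct=0.5, high_pct=99.5, out_max=255):
    """Linearly remap pixel intensities so that the given percentiles
    span the full tonal range, clipping the extremes."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(float) - lo) / (hi - lo) * out_max
    return np.clip(stretched, 0, out_max).astype(np.uint8)

# A compressed histogram occupying only the range [100, 140]:
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint16)
out = stretch_histogram(img)
```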

  10. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    NASA Astrophysics Data System (ADS)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust, since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available in the market to measure those parameters, their relatively high cost makes them unavailable in many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry to measure the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after the measurement of the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
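
    The two measurement principles mentioned, cloud height from parallax and visibility from contrast via the Lambert-Beer law (the Koschmieder relation), can be sketched with the standard textbook formulas. The numeric values below are illustrative assumptions, not taken from the prototype:

```python
import math

def cloud_base_height(baseline_m, focal_mm, pixel_pitch_um, disparity_px):
    """Stereo parallax: two upward-looking cameras a known baseline apart
    see the same cloud feature shifted by a pixel disparity;
    H = B * f / (d * pixel_pitch)."""
    return baseline_m * (focal_mm * 1e-3) / (disparity_px * pixel_pitch_um * 1e-6)

def visibility_m(contrast_ratio, distance_m):
    """Koschmieder relation: extinction coefficient from the measured
    contrast of a dark object against the background sky at a known
    distance; visibility is where contrast falls to the 2% threshold
    (3.912 = -ln 0.02)."""
    sigma = -math.log(contrast_ratio) / distance_m
    return 3.912 / sigma

# Illustrative numbers: 100 m baseline, 8 mm lens, 3 um pixels, 50 px shift
h = cloud_base_height(baseline_m=100.0, focal_mm=8.0,
                      pixel_pitch_um=3.0, disparity_px=50.0)
v = visibility_m(contrast_ratio=0.02, distance_m=10000.0)
```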

  11. Digital camera measurements of soot temperature and soot volume fraction in axisymmetric flames.

    PubMed

    Guo, Haiqing; Castillo, Jose A; Sunderland, Peter B

    2013-11-20

    New diagnostics are presented that use a digital camera to measure full-field soot temperatures and soot volume fractions in axisymmetric flames. The camera is a Nikon D700 with 12 megapixels and 14 bit depth in each color plane, which was modified by removing the infrared and anti-aliasing filters. The diagnostics were calibrated with a blackbody furnace. The flame considered here was an 88 mm long ethylene/air co-flowing laminar jet diffusion flame on a round 11.1 mm burner. The resolution in the flame plane is estimated at between 0.1 and 0.7 mm. Soot temperatures were measured from soot radiative emissions, using ratio pyrometry at 450, 650, and 900 nm following deconvolution. These had a range of 1600-1850 K, a temporal resolution of 125 ms, and an estimated uncertainty of ±50 K. Soot volume fractions were measured two ways: from soot radiative emissions and from soot laser extinction at 632.8 nm, both following deconvolution. Soot volume fractions determined from emissions had a range of 0.1-10 ppm, temporal resolutions of 125 ms, and an estimated uncertainty of ±30%. Soot volume fractions determined from laser extinction had a range of 0.2-10 ppm, similar temporal resolutions, and an estimated uncertainty of ±10%. The present measurements agree with past measurements in this flame using traversing optics and probes; however, they avoid the long test times and other complications of such traditional methods.
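
    Ratio pyrometry as described, soot temperature from the ratio of emissions at two wavelengths, follows from the Wien approximation to Planck's law when gray soot emissivity is assumed. A minimal sketch (the wavelengths match two of the pyrometry bands above; the intensity model is the standard textbook one, not the authors' calibrated pipeline):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(T, lam):
    """Spectral emission in the Wien approximation (up to a constant
    factor and a gray emissivity that cancels in the ratio)."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_pyrometry_T(I1, I2, lam1, lam2):
    """Invert the two-color ratio
    I1/I2 = (lam2/lam1)^5 * exp(-(C2/T)(1/lam1 - 1/lam2))
    for the temperature T."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (
        5.0 * math.log(lam2 / lam1) - math.log(I1 / I2))

lam1, lam2 = 450e-9, 650e-9   # two of the pyrometry wavelengths, m
T_true = 1800.0               # within the reported 1600-1850 K range
T_est = ratio_pyrometry_T(wien_intensity(T_true, lam1),
                          wien_intensity(T_true, lam2), lam1, lam2)
```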

  12. Evaluation of an airborne remote sensing platform consisting of two consumer-grade cameras for crop identification

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related ...

  13. Analysis of airborne LiDAR as a basis for digital soil mapping in Alpine areas

    NASA Astrophysics Data System (ADS)

    Kringer, K.; Tusch, M.; Geitner, C.; Meißl, G.; Rutzinger, M.

    2009-04-01

    Especially in mountainous regions like the Alps, the formation of soil is highly influenced by relief characteristics. Among all factors included in Jenny's (1941) model for soil development, relief is the one most commonly used in approaches to create digital soil maps and to derive soil properties from secondary data sources (McBratney et al. 2003). Elevation data, first-order (slope, aspect) and second-order derivatives (plan, profile and cross-sectional curvature) as well as complex morphometric parameters (various landform classifications, e.g., Wood 1996) and compound indices (e.g., topographic wetness indices, vertical distance to drainage network, insolation) can be calculated from digital elevation models (DEM). However, while being an important source of information for digital soil mapping at small map scales, "conventional" DEMs are of limited use for the design of large-scale conceptual soil maps for small areas due to rather coarse raster resolutions with cell sizes ranging from 20 to 100 meters. Slight variations in elevation and small landform features might not be discernible even though they might have a significant effect on soil formation, e.g., regarding the influence of groundwater in alluvial soils or the extent of alluvial fans. Nowadays, Airborne LiDAR (Light Detection And Ranging) provides highly accurate data for the elaboration of high-resolution digital terrain models (DTM), even in forested areas. In the project LASBO (Laserscanning in der Bodenkartierung), the applicability of digital terrain models derived from LiDAR for the identification of soil-relevant geomorphometric parameters is investigated. Various algorithms which were initially designed for coarser raster data are applied to high-resolution DTMs. Test areas for LASBO are located in the region of Bruneck (Italy) and near the municipality of Kramsach in the Inn Valley (Austria).
The freely available DTM for Bruneck has a raster resolution of 2.5 meters while in Kramsach a DTM with

  14. Quantifying the yellow signal driver behavior based on naturalistic data from digital enforcement cameras.

    PubMed

    Bar-Gera, H; Musicant, O; Schechtman, E; Ze'evi, T

    2016-11-01

    The yellow signal driver behavior, reflecting the dilemma zone behavior, is analyzed using naturalistic data from digital enforcement cameras. The key variable in the analysis is the entrance time after the yellow onset, and its distribution. This distribution can assist in determining two critical outcomes: the safety outcome related to red-light-running angle accidents, and the efficiency outcome. The connection to other approaches for evaluating the yellow signal driver behavior is also discussed. The dataset was obtained from 37 digital enforcement cameras at non-urban signalized intersections in Israel, over a period of nearly two years. The data contain more than 200 million vehicle entrances, of which 2.3% (∼5 million vehicles) entered the intersection during the yellow phase. In all non-urban signalized intersections in Israel the green phase ends with 3 s of flashing green, followed by 3 s of yellow. On most non-urban signalized roads in Israel the posted speed limit is 90 km/h. Our analysis focuses on crossings during the yellow phase and the first 1.5 s of the red phase. The analysis method consists of two stages. In the first stage we tested whether the frequency of crossings is constant at the beginning of the yellow phase. We found that the pattern was stable (i.e., the frequencies were constant) at 18 intersections, nearly stable at 13 intersections and unstable at 6 intersections. In addition to the 6 intersections with unstable patterns, two other outlying intersections were excluded from subsequent analysis. Logistic regression models were fitted for each of the remaining 29 intersections. We examined both standard (exponential) logistic regression and four-parameter logistic regression. The results show a clear advantage for the former. The estimated parameters show that the time when the frequency of crossing reduces to half ranges from 1.7 to 2.3 s after yellow onset. The duration of the reduction of the relative frequency from 0.9 to 0.1 ranged
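
    The quantities extracted from the fitted standard (exponential) logistic model, the half-frequency time and the 0.9-to-0.1 decay duration, have simple closed forms. A sketch with hypothetical parameter values (t50 chosen inside the reported 1.7-2.3 s range; not fitted to the actual data):

```python
import math

def crossing_frequency(t, t50, s):
    """Relative frequency of entrances t seconds after yellow onset,
    modelled as a standard logistic decay with half-time t50 and scale s."""
    return 1.0 / (1.0 + math.exp((t - t50) / s))

def decay_duration(s, hi=0.9, lo=0.1):
    """Time for the relative frequency to fall from `hi` to `lo`:
    s * (logit(hi) - logit(lo)); for 0.9 -> 0.1 this is 2*s*ln(9)."""
    logit = lambda p: math.log(p / (1.0 - p))
    return s * (logit(hi) - logit(lo))

t50, s = 2.0, 0.25          # hypothetical fitted parameters, seconds
f_half = crossing_frequency(t50, t50, s)   # 0.5 by construction
dur = decay_duration(s)
```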

  15. Space-bandwidth extension in parallel phase-shifting digital holography using a four-channel polarization-imaging camera.

    PubMed

    Tahara, Tatsuki; Ito, Yasunori; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-07-15

    We propose a method for extending the space bandwidth (SBW) available for recording an object wave in parallel phase-shifting digital holography using a four-channel polarization-imaging camera. A linear spatial carrier of the reference wave is introduced to an optical setup of parallel four-step phase-shifting interferometry using a commercially available polarization-imaging camera that has four polarization-detection channels. Then a hologram required for parallel two-step phase shifting, which is a technique capable of recording the widest SBW in parallel phase shifting, can be obtained. The effectiveness of the proposed method was numerically and experimentally verified.

  16. A New Lunar Digital Elevation Model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera

    NASA Technical Reports Server (NTRS)

    Barker, M. K.; Mazarico, E.; Neumann, G. A.; Zuber, M. T.; Haruyama, J.; Smith, D. E.

    2015-01-01

    We present an improved lunar digital elevation model (DEM) covering latitudes within ±60°, at a horizontal resolution of 512 pixels per degree (∼60 m at the equator) and a typical vertical accuracy of ∼3 to 4 m. This DEM is constructed from ∼4.5 × 10⁹ geodetically-accurate topographic heights from the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter, to which we co-registered 43,200 stereo-derived DEMs (each 1° × 1°) from the SELENE Terrain Camera (TC) (∼10¹⁰ pixels total). After co-registration, approximately 90% of the TC DEMs show root-mean-square vertical residuals with the LOLA data of <5 m compared to ∼50% prior to co-registration. We use the co-registered TC data to estimate and correct orbital and pointing geolocation errors from the LOLA altimetric profiles (typically amounting to <10 m horizontally and <1 m vertically). By combining both co-registered datasets, we obtain a near-global DEM with high geodetic accuracy, and without the need for surface interpolation. We evaluate the resulting LOLA + TC merged DEM (designated as "SLDEM2015") with particular attention to quantifying seams and crossover errors.

  17. New long-zoom lens for 4K super 35mm digital cameras

    NASA Astrophysics Data System (ADS)

    Thorpe, Laurence J.; Usui, Fumiaki; Kamata, Ryuhei

    2015-05-01

    The world of television production is beginning to adopt 4K Super 35 mm (S35) image capture for a widening range of program genres that seek both the unique imaging properties of that large image format and the protection of their program assets in a world anticipating future 4K services. Documentary and natural history production in particular are transitioning to this form of production. The nature of their shooting demands long zoom lenses. In their traditional world of 2/3-inch digital HDTV cameras they have a broad choice in portable lenses - with zoom ranges as high as 40:1. In the world of Super 35mm the longest zoom lens is limited to 12:1 offering a telephoto of 400mm. Canon was requested to consider a significantly longer focal range lens while severely curtailing its size and weight. Extensive computer simulation explored countless combinations of optical and optomechanical systems in a quest to ensure that all operational requests and full 4K performance could be met. The final lens design is anticipated to have applications beyond entertainment production, including a variety of security systems.

  18. 3D Reconstruction of Static Human Body with a Digital Camera

    NASA Astrophysics Data System (ADS)

    Remondino, Fabio

    2003-01-01

    Nowadays the 3D reconstruction and modeling of real humans is one of the most challenging problems and a topic of great interest. The human models are used for movies, video games or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or with a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed, with a mean accuracy of ca. 2 mm, by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
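
    The forward ray intersection step, computing a 3D point from matched image points on several oriented images, can be sketched as a least-squares intersection of camera rays. This is a generic formulation under the assumption of known camera centers and ray directions, not the paper's specific adjustment:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares forward intersection: the 3D point minimizing the
    sum of squared orthogonal distances to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Three camera stations observing the same point:
point = np.array([1.0, 2.0, 5.0])
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([2.0, 0.0, 0.0]),
           np.array([0.0, 2.0, 0.0])]
rec = intersect_rays(origins, [point - o for o in origins])
```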

  19. Comparison of Floc Sizes From a LISST-100 and a Digital Floc Camera

    NASA Astrophysics Data System (ADS)

    Mikkelsen, O. A.; Chant, R.; Hill, P. S.; Milligan, T. G.

    2003-12-01

    A LISST-100 in situ laser particle sizer was deployed together with a digital floc camera (DFC) in estuarine and continental shelf settings on several occasions. The LISST-100 can measure particle sizes in the range 2.5-500 μm, whereas the DFC can detect particles larger than 125 μm; the two instruments thus overlap in the 125-500 μm size range. The particle sizes from the two instruments are compared. At first glance, the LISST-100 underestimates the floc median diameter (D50) by a factor of 2-4 when compared to the DFC. However, when the data from the two instruments are "trimmed", so that only particles in the overlapping size ranges are considered, the two instruments yield nearly identical values for D50. Thus, the LISST-100 and the DFC both appear to detect particles in their overlapping size ranges correctly. Since the DFC only detects particles larger than 125 μm, it will usually overestimate D50, whereas the LISST-100 will usually underestimate D50. Using the two instruments in conjunction, floc size spectra from 2.5 μm and up can be measured, hence better estimates of the D50 can be obtained.
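
    The "trimming" comparison amounts to computing the volume-median diameter D50 of a size spectrum before and after restricting it to the overlapping 125-500 μm range. A sketch with a synthetic spectrum (illustrative, not instrument data):

```python
import numpy as np

def d50(diameters_um, volume_conc, lo=None, hi=None):
    """Volume-median diameter of a size spectrum; optionally trim the
    spectrum to a common size range first."""
    d = np.asarray(diameters_um, float)
    v = np.asarray(volume_conc, float)
    if lo is not None:
        keep = (d >= lo) & (d <= hi)
        d, v = d[keep], v[keep]
    cum = np.cumsum(v) / v.sum()          # cumulative volume fraction
    return float(np.interp(0.5, cum, d))  # diameter at the 50% point

# Synthetic log-normal floc spectrum over the LISST-100 range:
d = np.geomspace(2.5, 500.0, 32)
v = np.exp(-0.5 * ((np.log(d) - np.log(150.0)) / 0.6) ** 2)
full_d50 = d50(d, v)                         # whole 2.5-500 um spectrum
trimmed_d50 = d50(d, v, lo=125.0, hi=500.0)  # overlap range only
```

    Dropping the sub-125 μm tail shifts the median upward, reproducing the qualitative bias discussed in the abstract.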

  20. Digital chemiluminescence imaging of DNA sequencing blots using a charge-coupled device camera.

    PubMed Central

    Karger, A E; Weiss, R; Gesteland, R F

    1992-01-01

    Digital chemiluminescence imaging with a cryogenically cooled charge-coupled device (CCD) camera is used to visualize DNA sequencing fragments covalently bound to a blotting membrane. The detection is based on DNA hybridization with an alkaline phosphatase (AP) labeled oligodeoxyribonucleotide probe and AP-triggered chemiluminescence of the substrate 3-(2'-spiro-adamantane)-4-methoxy-4-(3"-phosphoryloxy)phenyl-1,2-dioxetane (AMPPD). The detection using a direct AP-oligonucleotide conjugate is compared to the secondary detection of biotinylated oligonucleotides with respect to their sensitivity and nonspecific binding to the nylon membrane by quantitative imaging. Using the direct oligonucleotide-AP conjugate as a hybridization probe, sub-attomol (0.5 pg of 2.7 kb pUC plasmid DNA) quantities of membrane-bound DNA are detectable with 30 min CCD exposures. Detection using the biotinylated probe in combination with streptavidin-AP was found to be background limited by nonspecific binding of streptavidin-AP and the oligo(biotin-11-dUTP) label in equal proportions. In contrast, the nonspecific background of the AP-labeled oligonucleotide is indistinguishable from that seen with ³²P label, in that respect making AP an ideal enzymatic label. The effects of hybridization time, probe concentration, and presence of luminescence enhancers on the detection of plasmid DNA were investigated. PMID:1480487

  1. A new lunar digital elevation model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera

    NASA Astrophysics Data System (ADS)

    Barker, M. K.; Mazarico, E.; Neumann, G. A.; Zuber, M. T.; Haruyama, J.; Smith, D. E.

    2016-07-01

    We present an improved lunar digital elevation model (DEM) covering latitudes within ±60°, at a horizontal resolution of 512 pixels per degree (∼60 m at the equator) and a typical vertical accuracy ∼3 to 4 m. This DEM is constructed from ∼4.5 × 10⁹ geodetically-accurate topographic heights from the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter, to which we co-registered 43,200 stereo-derived DEMs (each 1° × 1°) from the SELENE Terrain Camera (TC) (∼10¹⁰ pixels total). After co-registration, approximately 90% of the TC DEMs show root-mean-square vertical residuals with the LOLA data of <5 m compared to ∼50% prior to co-registration. We use the co-registered TC data to estimate and correct orbital and pointing geolocation errors from the LOLA altimetric profiles (typically amounting to <10 m horizontally and <1 m vertically). By combining both co-registered datasets, we obtain a near-global DEM with high geodetic accuracy, and without the need for surface interpolation. We evaluate the resulting LOLA + TC merged DEM (designated as "SLDEM2015") with particular attention to quantifying seams and crossover errors.

  2. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of rice. The height of the plant, the number of stems, and the color of the leaves are well-known parameters that indicate rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis was proposed, which is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, in the case of a vegetation cover rate calculation method depending on the automatic binarization process, there is a possibility that the vegetation cover rate decreases against the growth of rice. In this paper, a calculation method of vegetation cover rate is proposed which is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and the proposed method, respectively, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed
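
    The automatic binarization step can be sketched with Otsu's method applied to a vegetation-index image, the cover rate being the fraction of pixels classified as plant. Using Otsu is an assumption for illustration; the abstract does not name the binarization algorithm:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's automatic threshold: pick the histogram bin centre that
    maximizes the between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # weight of the lower class
    m = np.cumsum(p * centers)     # cumulative first moment
    mt = m[-1]                     # global mean
    w1 = 1.0 - w0
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mt * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def cover_rate(index_img):
    """Vegetation cover rate: fraction of pixels whose index exceeds
    the automatically selected threshold."""
    t = otsu_threshold(index_img.ravel())
    return float((index_img > t).mean())

# Synthetic scene: soil pixels around 0.30, rice pixels around 0.45
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.30, 0.02, 5000),
                      rng.normal(0.45, 0.02, 5000)]).reshape(100, 100)
rate = cover_rate(img)
```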

  3. Zoom lens design for 10.2-megapixel APS-C digital SLR cameras.

    PubMed

    Sun, Wen-Shing; Chu, Pu-Yi; Tien, Chuen-Lin; Chung, Meng-Feng

    2017-01-20

    A zoom lens design for a 10.2-megapixel digital single-lens reflex (SLR) camera with an advanced photo system type-C (APS-C) CCD image sensor is presented. The proposed zoom lens design consists of four groups of 3× zoom lenses with a focal length range of 17-51 mm. In the optimization process, 107 kinds of Schott glass combined with 26 kinds of plastic materials, as listed in Code V, are used. The best combination of glass and plastic materials is found based on the nd-Vd diagram. The modulation transfer function (MTF) was greater than 0.509 at 42 lp/mm, the lateral chromatic aberration was less than 5 μm, the optical distortion was less than 1.97%, and the relative illumination was greater than 80.05%. We also performed a tolerance analysis with the 2σ (97.7%) position selected, and give tolerance tables and results for three zooming positions, which makes the design more practical for manufacturing.

  4. Performance analysis of digital cameras versus chromatic white light (CWL) sensors for the localization of latent fingerprints in crime scenes

    NASA Astrophysics Data System (ADS)

    Jankow, Mathias; Hildebrandt, Mario; Sturm, Jennifer; Kiltz, Stefan; Vielhauer, Claus

    2012-06-01

    In future applications of contactless acquisition techniques for latent fingerprints, the automatic localization of potential fingerprint traces in crime scenes is required. Our goal is to study the application of a camera-based approach in comparison with the performance of chromatic white light (CWL) techniques for latent fingerprint localization in coarse scans and the resulting acquisition using detailed scans. Furthermore, we briefly evaluate the suitability of the camera-based acquisition for the detection of malicious fingerprint traces using an extended camera setup in comparison to Kiltz et al. Our experimental setup includes a Canon EOS 550D digital single-lens reflex (DSLR) camera and a FRT MicroProf200 surface measurement device with a CWL600 sensor. We apply at least two fingerprints to each surface in our test set of 8 different smooth, textured and structured surfaces to evaluate the detection performance of the two localization techniques using different pre-processing and feature extraction techniques. Printed fingerprint patterns, as reproducible but potentially malicious traces, are additionally acquired and analyzed on foil and compact discs. Our results indicate a positive tendency towards fast localization using the camera-based technique. All fingerprints that are located using the CWL sensor are found using the camera. However, the disadvantage of the camera-based technique is that the size of the region of interest for the detailed scan of each potential latent fingerprint is usually slightly larger compared to the CWL-based localization. Furthermore, this technique does not acquire 3D data and the resulting images are distorted due to the necessary angle between the camera and the surface. When applying the camera-based approach, it is required to optimize the feature extraction and classification. Furthermore, the required acquisition time for each potential fingerprint needs to be estimated to determine the time-savings of the

  5. Snow melt and phenology of a subalpine grassland: analysis through the use of digital camera images.

    NASA Astrophysics Data System (ADS)

    Julitta, Tommaso; Cremonese, Edoardo; Colombo, Roberto; Rossini, Micol; Fava, Franceso; Cogliati, Sergio; Galvagno, Marta; Panigada, Cinzia; Siniscalco, Consolata; Morra di Cella, Umberto; Migliavacca, Mirco

    2013-04-01

    Plant phenology is a good indicator of the impact of climate change on ecosystems. On mountain systems the main environmental constraints governing phenological timing are air and soil temperature, photoperiod and the presence of snow. Recent studies showed the potential of using automated repeat digital photography for monitoring vegetative phenological events. In the present study, digital images collected with a CC640 Campbell Scientific camera over 3 years (2009, 2010, 2011) in a subalpine grassland were used to analyse the spatial patterns of phenological events and their relationship with the timing of snowmelt. Yearly time series of green chromatic coordinates (gcc) were computed from hourly images. In order to analyse the spatial pattern of phenological metrics, gcc time series for each 10x10 pixel region of the target ecosystem were computed and the start of the season for the 10x10 regions was extracted. Based on the same grid dimension, a snowmelt date map corresponding to the day of the year in which the snow disappears from the ground was obtained. Our main result showed that although the snowmelt occurs rapidly, within at most seven days, several distinct spatial patterns were identified. The comparison of spatial patterns of snowmelt and phenological dynamics led to quite unexpected results. In fact, a negative correlation was found between the two variables, meaning that the growing season begins later in convex areas characterized by an early snowmelt, and vice versa in concave areas. A detailed field vegetation analysis revealed that these patterns were related to different plant communities. In particular, differences in terms of species abundance seem to be related to convex and concave areas, mainly covered by grasses and by forbs respectively, suggesting that different patterns of snow accumulation and of water availability during the growing season due to micromorphology affect the vegetation community and so indirectly phenology. These
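
    The green chromatic coordinate and the 10x10-pixel regional averaging can be sketched as follows; a generic implementation of the stated definitions, not the authors' processing chain:

```python
import numpy as np

def green_chromatic(img):
    """Green chromatic coordinate gcc = G / (R + G + B), per pixel."""
    img = img.astype(float)
    total = img.sum(axis=2)
    return np.divide(img[..., 1], total,
                     out=np.zeros_like(total), where=total > 0)

def region_means(gcc, block=10):
    """Mean gcc over non-overlapping block x block pixel regions."""
    h, w = gcc.shape
    h, w = h - h % block, w - w % block
    g = gcc[:h, :w].reshape(h // block, block, w // block, block)
    return g.mean(axis=(1, 3))

# Flat synthetic frame: G twice R and B, so gcc = 0.5 everywhere
img = np.stack([np.full((20, 20), 1.0),
                np.full((20, 20), 2.0),
                np.full((20, 20), 1.0)], axis=2)
gcc = green_chromatic(img)
regions = region_means(gcc, block=10)
```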

  6. Three-dimensional displacement measurement for diffuse object using phase-shifting digital holography with polarization imaging camera.

    PubMed

    Kiire, Tomohiro; Nakadate, Suezou; Shibuya, Masato; Yatagai, Toyohiko

    2011-12-01

    The amount of displacement of a diffuse object can be measured using phase-shifting digital holography with a polarization imaging camera. Four digital holograms in quadrature are extracted from the polarization imaging camera and used to calculate the phase hologram. Two Fourier transforms of the phase holograms are calculated, before and after the displacement of the object. A phase slope is subsequently obtained from the phase distribution of the division between the two Fourier transforms. The slope of the phase distribution is proportional to the lateral displacement of the object. The sensitivity is less than one pixel size in the lateral direction of the movement. The longitudinal component of the displacement can also be measured separately, from the intercept on the phase axis of the phase distribution of the division between the two Fourier transforms of the phase holograms.
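    Four holograms in quadrature correspond to phase shifts of 0, π/2, π and 3π/2, so the phase hologram follows from the standard four-step phase-shifting formula. A sketch under that assumption (not the authors' exact implementation):

```python
import numpy as np

def phase_hologram(i0, i1, i2, i3):
    """Phase from four phase-shifted intensity holograms.
    With I_k = A + B*cos(phi + k*pi/2), k = 0..3, the phase is
    phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3 - i1, i0 - i2)
```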

  7. Benchmarking of depth of field for large out-of-plane deformations with single camera digital image correlation

    NASA Astrophysics Data System (ADS)

    Van Mieghem, Bart; Ivens, Jan; Van Bael, Albert

    2017-04-01

    A problem that arises when performing stereo digital image correlation in applications with large out-of-plane displacements is that the images may become unfocused. This unfocusing can result in correlation instabilities or inaccuracies. When performing DIC measurements while expecting large out-of-plane displacements, researchers either rely on their experience or use the standard equations from photography to estimate the parameters affecting the depth of field (DOF) of the camera. A limitation of the latter approach is that the definition of sharpness is a human-defined parameter that does not reflect the performance of the digital image correlation system. To obtain a more representative DOF value for DIC applications, a standardised testing method is presented here, making use of real camera and lens combinations as well as actual image correlation results. The method is based on experimental single-camera DIC measurements of a backwards-moving target. Correlation results from focused and unfocused images are compared, and a threshold value defines whether or not the correlation results are acceptable even if the images are (slightly) unfocused. By following the proposed approach, the complete DOF of a specific camera/lens combination as a function of the aperture setting and the distance from the camera to the target can be defined. The comparison between the theoretical and the experimental DOF results shows that the achievable DOF for DIC applications is larger than what theoretical calculations predict. Practically, this means that the cameras can be positioned closer to the target than the theoretical approach suggests. This leads to a gain in resolution and measurement accuracy.
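    The photographic DOF equations the authors benchmark against can be sketched in their thin-lens, hyperfocal-distance form (variable names are ours, not the paper's):

```python
def depth_of_field(f, N, c, s):
    """Classical photographic depth of field (thin-lens approximation).
    f: focal length, N: f-number, c: circle-of-confusion diameter,
    s: focus distance (all lengths in the same unit).
    Returns (near limit, far limit, total DOF); the far limit is infinite
    when the focus distance reaches the hyperfocal distance."""
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2.0 * f)
    far = float('inf') if s >= H else s * (H - f) / (H - s)
    return near, far, far - near
```

    Stopping down (larger N) shrinks the effective aperture and widens the in-focus zone, which is the trade-off the experimental method quantifies for DIC.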

  8. Digital X-ray camera for quality evaluation and three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  9. Two Years of Digital Terrain Model Production Using the Lunar Reconnaissance Orbiter Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Burns, K.; Robinson, M. S.; Speyerer, E.; LROC Science Team

    2011-12-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to gather stereo observations with the Narrow Angle Camera (NAC). These stereo observations are used to generate digital terrain models (DTMs). The NAC has a pixel scale of 0.5 to 2.0 meters but was not designed for stereo observations, and thus requires the spacecraft to roll off-nadir to acquire these images. Slews interfere with the data collection of the other instruments, so opportunities are currently limited to four per day. Arizona State University has produced DTMs from 95 stereo pairs for 11 Constellation Project (CxP) sites (Aristarchus, Copernicus crater, Gruithuisen domes, Hortensius domes, Ina D-caldera, Lichtenberg crater, Mare Ingenii, Marius hills, Reiner Gamma, South Pole-Aitken Rim, Sulpicius Gallus) as well as 30 other regions of scientific interest (including: Bhabha crater, highest and lowest elevation points, Highland Ponds, Kugler Anuchin, Linne Crater, Planck Crater, Slipher crater, Sears Crater, Mandel'shtam Crater, Virtanen Graben, Compton/Belkovich, Rumker Domes, King Crater, Luna 16/20/23/24 landing sites, Ranger 6 landing site, Wiener F Crater, Apollo 11/14/15/17, fresh craters, impact melt flows, Larmor Q crater, Mare Tranquillitatis pit, Hansteen Alpha, Moore F Crater, and Lassell Massif). To generate DTMs, the USGS ISIS software and SOCET SET from BAE Systems are used. To increase the absolute accuracy of the DTMs, data obtained from the Lunar Orbiter Laser Altimeter (LOLA) are used to coregister the NAC images and define the geodetic reference frame. NAC DTMs have been used in the examination of several sites, e.g. Compton-Belkovich, Marius Hills and Ina D-caldera [1-3]. LROC will continue to acquire high-resolution stereo images throughout the science phase of the mission and any extended mission opportunities, thus providing a vital dataset for scientific research as well as future human and robotic exploration. [1] B.L. Jolliff (2011) Nature

  10. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we take into account the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds of, times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera.
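    A three-element median such as the one named above can be selected without a full sort. The sketch below is our own illustration of the idea (a scalar median-of-three plus a per-pixel temporal variant), not the authors' reconstruction pipeline:

```python
import numpy as np

def median_of_three(a, b, c):
    """Median of three values using at most three comparisons (no full sort)."""
    if a > b:
        a, b = b, a          # now a <= b
    if b > c:
        b = c                # b becomes min(max(a, b), c)
    return max(a, b)

def temporal_median3(frames):
    """Per-pixel median over three consecutive frames of shape (3, H, W),
    e.g. to suppress outliers in a reconstructed high-speed sequence."""
    return np.median(frames, axis=0)
```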

  11. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  12. Deformation monitoring with off-the-shelf digital cameras for civil engineering fatigue testing

    NASA Astrophysics Data System (ADS)

    Detchev, I.; Habib, A.; He, F.; El-Badry, M.

    2014-06-01

    Deformation monitoring of civil infrastructure systems is important in terms of both safety and serviceability. The former refers to estimating the maximum loading capacity during the design stages of a building project, and the latter means performing regularly scheduled maintenance of an already existing structure. Traditionally, large structures have been monitored using surveying techniques, while fine-scale monitoring of structural components such as beams and trusses has been done with strain gauge instrumentation. In the past decade, digital photogrammetric systems coupled with image processing techniques have also been used for deformation monitoring. The major advantage of this remote sensing method for performing deformation monitoring is that there is no need to access the object of interest while testing is in progress. The paper is the result of an experiment where concrete beams with polymer support sheets are subjected to dynamic loading conditions by a hydraulic actuator in a structures laboratory. This type of loading is also known as fatigue testing, and is used to simulate the typical use of concrete beams over a long period of time. From a photogrammetric point of view, the challenge for this type of experiment is to avoid motion artifacts by maximizing the sensor frame rate, while at the same time retaining image quality good enough to achieve satisfactory reconstruction precision. This research effort investigates the optimal camera settings (e.g., aperture, shutter speed, sensor sensitivity, and image resolution) needed to balance a high sensor frame rate against good image quality. The results are first evaluated in terms of their repeatability, and then in terms of their accuracy. The accuracy of the results is checked against a second set of results from high-quality laser transducers.

  13. Design and Fabrication of Two-Dimensional Semiconducting Bolometer Arrays for the High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC-II)

    NASA Technical Reports Server (NTRS)

    Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. Darren; Harper, D. Al; Jhabvala, Murzy D.; Simpson, A. D. (Technical Monitor)

    2002-01-01

    The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC "Pop-Up" Detectors (PUDs) use a unique folding technique to enable a 12 x 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar® suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, and keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 x 32-element array. Engineering results from the first light run of SHARC II at the Caltech Submillimeter Observatory (CSO) are presented.

  15. A pilot project combining multispectral proximal sensors and digital cameras for monitoring tropical pastures

    NASA Astrophysics Data System (ADS)

    Handcock, Rebecca N.; Gobbett, D. L.; González, Luciano A.; Bishop-Hurley, Greg J.; McGavin, Sharon L.

    2016-08-01

    Timely and accurate monitoring of pasture biomass and ground cover is necessary in livestock production systems to ensure productive and sustainable management. Interest in the use of proximal sensors for monitoring pasture status in grazing systems has increased, since data can be returned in near real time. Proximal sensors have the potential for deployment on large properties where remote sensing may not be suitable due to issues such as spatial scale or cloud cover. There are unresolved challenges in gathering reliable sensor data and in calibrating raw sensor data to values such as pasture biomass or vegetation ground cover, which allow meaningful interpretation of sensor data by livestock producers. Our goal was to assess whether a combination of proximal sensors could be reliably deployed to monitor tropical pasture status in an operational beef production system, as a precursor to designing a full sensor deployment. We use this pilot project to (1) illustrate practical issues around sensor deployment, (2) develop the methods necessary for the quality control of the sensor data, and (3) assess the strength of the relationships between vegetation indices derived from the proximal sensors and field observations across the wet and dry seasons. Proximal sensors were deployed at two sites in a tropical pasture on a beef production property near Townsville, Australia. Each site was monitored by a Skye SKR four-band multispectral sensor (every 1 min), a digital camera (every 30 min), and a soil moisture sensor (every 1 min), each of which was operated over 18 months. Raw data from each sensor were processed to calculate multispectral vegetation indices. The data capture from the digital cameras was more reliable than that from the multispectral sensors, which had up to 67 % of data discarded after data cleaning and quality control for technical issues related to the sensor design, as well as environmental issues such as water incursion and insect infestations.
We recommend

  16. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    NASA Astrophysics Data System (ADS)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work, high financial cost, and the influence of weather conditions and topographic cover, which can be overcome by means of integrated airborne-based LiDAR and very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne-based LiDAR and multispectral digital image datasets over the city of Istanbul, Turkey. The scheme includes the detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and the automated extraction of 3D information about the vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud datasets. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also showed that the integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
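    The shadow-free vegetation detection step described above combines a vegetation index with a shadow index. A hedged sketch (the threshold values, function names, and simple boolean combination are illustrative, not taken from the paper):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - R) / (NIR + R)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    safe = np.where(denom != 0, denom, 1.0)   # guard against division by zero
    return np.where(denom != 0, (nir - red) / safe, 0.0)

def vegetation_mask(nir, red, shadow_index, ndvi_thresh=0.3, shadow_thresh=0.2):
    """Shadow-free vegetation mask: NDVI above a threshold AND outside shadows.
    Both thresholds are illustrative placeholders."""
    return (ndvi(nir, red) > ndvi_thresh) & (shadow_index < shadow_thresh)
```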

  17. Comparison of Target- and Mutual Information Based Calibration of Terrestrial Laser Scanner and Digital Camera for Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Omidalizarandi, M.; Neumann, I.

    2015-12-01

    In the current state of the art, geodetic deformation analysis of natural and artificial objects (e.g. dams, bridges, ...) is ongoing research in both static and kinematic modes and has received considerable interest from researchers and geodetic engineers. In this work, in order to increase the accuracy of geodetic deformation analysis, a terrestrial laser scanner (TLS; here the Zoller+Fröhlich IMAGER 5006) and a high-resolution digital camera (Nikon D750) are integrated so that they complement each other. In order to optimally combine the acquired data of the hybrid sensor system, a highly accurate estimation of the extrinsic calibration parameters between the TLS and the digital camera is a vital preliminary step. Thus, the calibration of the aforementioned hybrid sensor system can be separated into three single calibrations: calibration of the camera, calibration of the TLS, and extrinsic calibration between the TLS and the digital camera. In this research, we focus on highly accurate estimation of the extrinsic parameters between the fused sensors; both target-based and targetless (mutual information based) methods are applied. In target-based calibration, different types of observations (image coordinates, TLS measurements, and laser tracker measurements for validation) are utilized, and variance component estimation is applied to optimally assign adequate weights to the observations. Space resection bundle adjustment based on the collinearity equations is solved using the Gauss-Markov and Gauss-Helmert models. Statistical tests are performed to discard outliers and large residuals in the adjustment procedure. In the end, the two aforementioned approaches are compared, their advantages and disadvantages are investigated, and numerical results are presented and discussed.

  18. Measurement of the spatial frequency response (SFR) of digital still-picture cameras using a modified slanted-edge method

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Hsu, Yun C.; Chuang, Kai W.

    2000-06-01

    Spatial resolution is one of the main characteristics of electronic imaging devices such as the digital still-picture camera. It describes the capability of a device to resolve the spatial details of an image formed by the incoming optical information. The overall resolving capability is of great interest, although there are various factors, contributed by camera components and signal processing algorithms, affecting the spatial resolution. The spatial frequency response (SFR), analogous to the MTF of an optical imaging system, is one of the four measurements for the analysis of spatial resolution defined in ISO/FDIS 12233, and it provides a complete profile of the spatial response of digital still-picture cameras. In that document, a test chart is employed to estimate the spatial resolving capability. The SFR is calculated using the slanted-edge method, in which a scene with a black-to-white or white-to-black edge tilted at a specified angle is captured. An algorithm is used to find the line spread function as well as the SFR. We present a modified algorithm in which no prior information about the angle of the tilted black-to-white edge is needed. The tilt angle is estimated by assuming that a region around the center of the transition between the black and white regions is linear. At a tilt angle of 8 degrees the minimum estimation error is about 3%. The advantages of the modified slanted-edge method are high accuracy, flexible use, and low cost.
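    Once the slanted-edge procedure has assembled an oversampled edge-spread function (ESF), the final step is to differentiate it into a line-spread function (LSF) and take the normalized Fourier magnitude. A minimal sketch of that step (our own simplification of the ISO 12233 procedure; the edge-angle estimation and projection are omitted):

```python
import numpy as np

def sfr_from_esf(esf):
    """SFR from a 1D edge-spread function: differentiate to get the
    line-spread function, window it to suppress tail noise, then take
    the FFT magnitude normalized so that SFR(0) = 1."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf = lsf * np.hanning(len(lsf))        # taper the tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```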

  19. Small Field of View Scintimammography Gamma Camera Integrated to a Stereotactic Core Biopsy Digital X-ray System

    SciTech Connect

    Andrew Weisenberger; Fernando Barbosa; T. D. Green; R. Hoefer; Cynthia Keppel; Brian Kross; Stanislaw Majewski; Vladimir Popov; Randolph Wojcik

    2002-10-01

    A small field of view gamma camera has been developed for integration with a commercial stereotactic core biopsy system. The goal is to develop and implement a dual-modality imaging system utilizing scintimammography and digital radiography to evaluate the reliability of scintimammography in predicting the malignancy of suspected breast lesions from conventional X-ray mammography. The scintimammography gamma camera is a custom-built mini gamma camera with an active area of 5.3 cm × 5.3 cm and is based on a 2 × 2 array of Hamamatsu R7600-C8 position-sensitive photomultiplier tubes. The spatial resolution of the gamma camera at the collimator surface is < 4 mm full-width at half-maximum, with a sensitivity of ~4000 Hz/mCi. The system is also capable of acquiring dynamic scintimammographic data to allow for dynamic uptake studies. Sample images of preliminary clinical results are presented to demonstrate the performance of the system.

  20. Diabetic Retinopathy Screening Ratio Is Improved When Using a Digital, Nonmydriatic Fundus Camera Onsite in a Diabetes Outpatient Clinic

    PubMed Central

    Roser, Pia; Kalscheuer, Hannes; Groener, Jan B.; Lehnhoff, Daniel; Klein, Roman; Auffarth, Gerd U.; Nawroth, Peter P.; Schuett, Florian; Rudofsky, Gottfried

    2016-01-01

    Objective. To evaluate the effect of onsite screening with a nonmydriatic, digital fundus camera for diabetic retinopathy (DR) at a diabetes outpatient clinic. Research Design and Methods. This cross-sectional study included 502 patients, 112 with type 1 and 390 with type 2 diabetes. Patients attended screenings for microvascular complications, including diabetic nephropathy (DN), diabetic polyneuropathy (DP), and DR. Single-field retinal imaging with a digital, nonmydriatic fundus camera was used to assess DR. Prevalence and incidence of microvascular complications were analyzed and the ratio of newly diagnosed to preexisting complications for all entities was calculated in order to differentiate natural progress from missed DRs. Results. For both types of diabetes, prevalence of DR was 25.0% (n = 126) and incidence 6.4% (n = 32) (T1DM versus T2DM: prevalence: 35.7% versus 22.1%, incidence 5.4% versus 6.7%). 25.4% of all DRs were newly diagnosed. Furthermore, the ratio of newly diagnosed to preexisting DR was higher than those for DN (p = 0.12) and DP (p = 0.03) representing at least 13 patients with missed DR. Conclusions. The results indicate that implementing nonmydriatic, digital fundus imaging in a diabetes outpatient clinic can contribute to improved early diagnosis of diabetic retinopathy. PMID:26904690

  1. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
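    The linearity requirement described above (raw counts proportional to scene radiance) amounts to subtracting the sensor's black level and scaling by its usable range before any nonlinear tone mapping is applied. A hedged sketch (parameter names are ours; real raw pipelines also demosaic and flat-field-correct):

```python
import numpy as np

def linearize_raw(raw, black_level, white_level):
    """Map raw sensor counts to [0, 1] linearly with scene radiance.
    black_level / white_level are the sensor's dark offset and saturation
    count (illustrative parameters, typically read from raw-file metadata)."""
    raw = raw.astype(float)
    return np.clip((raw - black_level) / (white_level - black_level), 0.0, 1.0)
```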

  2. Photometric-based recovery of illuminant-free color images using a red-green-blue digital camera

    NASA Astrophysics Data System (ADS)

    Luis Nieves, Juan; Plata, Clara; Valero, Eva M.; Romero, Javier

    2012-01-01

    Albedo estimation has traditionally been used to make computational simulations of real objects under different conditions, but as yet no device is capable of measuring albedo directly. The aim of this work is to introduce a photometric-based color imaging framework that can estimate albedo and can reproduce the appearance of images both indoors and outdoors under different lights and illumination geometries. Using a calibration sample set composed of chips made of the same material but with different colors and textures, we compare two photometric-stereo techniques, one of them avoiding the effect of shadows and highlights in the image and the other ignoring this constraint. We combine a photometric-stereo technique with a color-estimation algorithm that directly relates the camera sensor outputs to the albedo values. The proposed method can produce illuminant-free images with good color accuracy when a three-channel red-green-blue (RGB) digital camera is used, even outdoors under solar illumination.

  3. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration.

    PubMed

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A; Allen, Justine J; Demirci, Utkan; Hanlon, Roger T

    2014-02-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging.

  4. Evaluation of a novel laparoscopic camera for characterization of renal ischemia in a porcine model using digital light processing (DLP) hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.

    2012-03-01

    Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.

  5. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and ImageJ software was developed and used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, free of structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by an MTS machine in a gradient from 0 N to 500 N, simulating the double-feet standing stance. Anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with ImageJ, image-processing software that can be freely downloaded from the National Institutes of Health. The procedure included recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measurement system
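    The center-of-mass marker localization described above can be sketched as an intensity-weighted centroid over a marker patch (a generic illustration, not the authors' exact ImageJ procedure):

```python
import numpy as np

def marker_centroid(gray_patch):
    """Sub-pixel marker center via the intensity-weighted center of mass.
    gray_patch: 2D array where the marker is bright on a dark background
    (e.g. after image inversion and segmentation)."""
    g = gray_patch.astype(float)
    total = g.sum()
    rows, cols = np.indices(g.shape)
    return (rows * g).sum() / total, (cols * g).sum() / total
```

    Tracking the centroid of each marker between load steps yields the displacement vectors from which the vertical translation and micro-rotation are computed.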

  6. Reading Out Single-Molecule Digital RNA and DNA Isothermal Amplification in Nanoliter Volumes with Unmodified Camera Phones.

    PubMed

    Rodriguez-Manzano, Jesus; Karymov, Mikhail A; Begolo, Stefano; Selck, David A; Zhukov, Dmitriy V; Jue, Erik; Ismagilov, Rustem F

    2016-03-22

    Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests.
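
    The ratiometric idea — taking a ratio of color channels so that overall illumination cancels — can be sketched as below. The channel pair, cutoff, and function name are illustrative assumptions, not the paper's calibrated procedure:

```python
import numpy as np

def ratiometric_readout(rgb_roi, cutoff=1.0):
    """Classify a reaction well as amplified or not from an RGB region.

    The ratio of two color channels (red/green here, chosen purely for
    illustration) cancels overall brightness, so the call is insensitive
    to exposure, illumination intensity, and similar camera differences.
    """
    r = rgb_roi[..., 0].astype(float).mean()
    g = rgb_roi[..., 1].astype(float).mean()
    ratio = r / g
    return ratio, ratio > cutoff

# The same well imaged under bright and dim lighting yields the same ratio.
well = np.array([[[120, 60, 40]]], dtype=np.uint8)   # hypothetical "positive" color
dim = (well.astype(float) * 0.5).astype(np.uint8)    # half the light
ratio_bright, pos_bright = ratiometric_readout(well)
ratio_dim, pos_dim = ratiometric_readout(dim)        # both ratios are 2.0
```

    Halving the light halves both channels, so the ratio — and therefore the positive/negative call — is unchanged, which is the property that makes the readout hardware-agnostic.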

  7. Reading Out Single-Molecule Digital RNA and DNA Isothermal Amplification in Nanoliter Volumes with Unmodified Camera Phones

    PubMed Central

    2016-01-01

    Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709

  8. Combining digital image correlation and projected fringe techniques on a multi-camera multi-projector platform

    NASA Astrophysics Data System (ADS)

    Nguyen, T. N.; Huntley, J. M.; Burguete, R.; Coggrave, C. R.

    2009-08-01

    This paper presents how a shape measurement system (SMS) based on projected fringes can be combined with a 2-D digital image correlation (DIC) technique to accurately measure surface profile and 3-D displacement fields at the same time. Unlike traditional 3-D DIC techniques, the proposed method can measure discontinuous surfaces as easily as smooth ones. The method can also be extended to a multi-camera multi-projector system, and thus complete 360° 3-D displacement fields can be obtained within a single global coordinate system. Details of the algorithm are presented together with experimental results.

  9. First Results from an Airborne Ka-Band SAR Using SweepSAR and Digital Beamforming

    NASA Technical Reports Server (NTRS)

    Sadowy, Gregory A.; Ghaemi, Hirad; Hensley, Scott C.

    2012-01-01

    SweepSAR is a wide-swath synthetic aperture radar technique that is being studied for application on future Earth science radar missions. This paper describes the design of an airborne radar demonstration that simulates an 11-m L-band (1.2-1.3 GHz) reflector geometry at Ka-band (35.6 GHz) using a 40-cm reflector. The Ka-band SweepSAR Demonstration system was flown on the NASA DC-8 airborne laboratory and used to study engineering performance trades and array calibration for SweepSAR configurations. We present an instrument and experiment overview, instrument calibration, and first results.

  10. DGNSS/INS Van Project for Road Survey in the CEI Countries: The Problem of Digital Camera Calibration

    NASA Astrophysics Data System (ADS)

    Deruda, G.; Falchi, E.; Sanna, G.; Vacca, G.

    In order to assess the influence of objective-lens distortion in digital photocameras and videocameras, a series of experiments using a digital photocamera by Nikon, a videocamera by Samsung and a webcam by Creative was performed, with the aim of testing the possibility of enhancing camera images by means of resampling techniques. For this purpose a network of fiducial points was materialized on two walls of a building in the Faculty of Engineering of Cagliari. Point coordinates were obtained by means of a topographic survey. Images and video sequences of the fronts were taken at several distances and different focal lengths, yielding an estimate of the lens behaviour, on the basis of which a regular grid of the displacements of points on the photo was generated for each camera. The grid was used in a resampling procedure to remove the influence of distortion from the images. The improvement in accuracy was estimated at between about 30 and 50%.

  11. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    NASA Astrophysics Data System (ADS)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the scale of the consumer-class camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for the system DSC, i.e. the digital still camera with interchangeable lens, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. A design with little degradation in performance due to all types of errors is preferred for good balance in terms of size, lens performance, and manufacturing yield. Decentering sensitivity of the moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups. Development of high-ratio zoom lenses with a vibration reduction mechanism is confronted by the challenge of reduced performance due to decentering, making control over decentering sensitivity between lens groups everything. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial in making lenses smaller, but decentering sensitivity is greater. A 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.

  12. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.

    1999-01-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10³ features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10⁵ closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
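
    At the core of step 2 — computing the 3-D coordinates of matched image points from the camera parameters of step 1 — is stereo triangulation. A minimal linear (DLT) triangulation sketch follows; the toy camera geometry is illustrative, and the paper's commercial system uses a refined bundle solution rather than this bare method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 are the matching pixel
    coordinates of the same feature in the two images. Each observation
    contributes two linear constraints; the homogeneous point is the
    null vector of the stacked system (last right singular vector).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two simple pinhole cameras one metre apart, both looking down +Z:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
X = triangulate(P1, P2, x1, x2)   # recovers X_true
```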

  13. Low-cost camera modifications and methodologies for very-high-resolution digital images

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...

  14. Greenness indices from digital cameras predict the timing and seasonal dynamics of canopy-scale photosynthesis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The proliferation of tower-mounted cameras co-located with eddy covariance instrumentation provides a novel opportunity to better understand the relationship between canopy phenology and the seasonality of canopy photosynthesis. In this paper, we describe the abilities and limitations of webcams to ...

  15. Single-camera microscopic stereo digital image correlation using a diffraction grating.

    PubMed

    Pan, Bing; Wang, Qiong

    2013-10-21

    A simple, cost-effective but practical microscopic 3D-DIC method using a single camera and a transmission diffraction grating is proposed for surface profile and deformation measurement of small-scale objects. By illuminating a test sample with a quasi-monochromatic source, the transmission diffraction grating placed in front of the camera produces two laterally spaced first-order diffraction views of the sample surface on the two halves of the camera target. The single image comprising negative and positive first-order diffraction views can be used to reconstruct the profile of the test sample, while two such images acquired before and after deformation can be employed to determine the 3D displacements and strains of the sample surface. The basic principles and implementation procedures of the proposed technique for microscopic 3D profile and deformation measurement are described in detail. The effectiveness and accuracy of the presented microscopic 3D-DIC method are verified by measuring the profile and 3D displacements of a regular cylinder surface.

  16. A Digital Readout System For The CSO Microwave Kinetic Inductance Camera

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, Walter; Mazin, B. A.; Zmuidzinas, J.

    2007-12-01

    Submillimeter galaxies are important to the understanding of galaxy formation and evolution. Determination of the spectral energy distribution in the millimeter and submillimeter regimes allows important and powerful diagnostics. Our group is developing a camera for the Caltech Submillimeter Observatory (CSO) using Microwave Kinetic Inductance Detectors (MKIDs). MKIDs are superconducting devices whose impedance changes with the absorption of photons. The camera will have 600 spatial pixels and 4 bands at 750 μm, 850 μm, 1.1 mm and 1.3 mm. For each spatial pixel of the camera the radiation is coupled to the MKIDs using phased-array antennas. This signal is split into 4 different bands using filters and detected using the superconductor as part of an MKID's resonant circuit. The detection process consists of measuring the changes in transmission through the resonator when it is illuminated. By designing resonant circuits to have different resonant frequencies and high transmission out of resonance, MKIDs can be frequency-domain multiplexed. This allows the simultaneous readout of many detectors through a single coaxial cable. The readout system makes use of microwave IQ modulation and is based on commercial electronics components operating at room temperature. The basic readout has been demonstrated on the CSO. We are working on the implementation of an improved design to be tested on a prototype system with 6x6 pixels and 4 colors next April on the CSO.

  17. Detecting Chlorophyll and Phycocyanin in Lake Texoma Using in Situ Photo from GPS Digital Camera and Landsat 8 OLI Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Hambright, K.; Xiao, X.

    2013-12-01

    Characterizing the temporal and spatial change of algal blooms across lake systems is difficult with conventional sampling methodologies. The application of remote sensing to lake water quality has improved significantly in recent years. However, there have been few reports of in situ photos from GPS digital cameras and the new Landsat 8 OLI satellite being used to monitor algal blooms in freshwater lakes. A pilot study was carried out in Lake Texoma, Oklahoma, on April 25th 2013. At each of 12 sites, pigments (chlorophyll a and phycocyanin concentrations), in situ spectral data and digital photos were acquired using a Hydrolab DS5X sonde (calibrated routinely against laboratory standards), an ASD FieldSpec and a GPS camera, respectively. The field spectral data sets were transformed to the blue, green and red ranges matching the spectral resolution of Landsat 8 OLI images by averaging the spectral reflectance signature over the first four Landsat 8 OLI bands. Compared with other ratio indices, red/blue was the best index for predicting phycocyanin and chlorophyll a concentrations, and pigment (phycocyanin and chlorophyll a) concentrations over the whole depth were selected for remote sensing detection in Lake Texoma in the subsequent analysis. An image-based darkest-pixel subtraction method was used for atmospheric correction of the Landsat 8 OLI images. After atmospheric correction, the DN values were extracted and used to compute the ratio of band 4 (red) / band 1 (blue). Higher correlation coefficients were found both between resampled spectral reflectance and the red/blue ratio of photo DN values (R2 = 0.9425, n = 12) and between resampled spectral reflectance and the red/blue ratio of Landsat 8 OLI image DN values (R2 = 0.8476, n = 12). Finally, we analyzed the correlation between pigment concentrations over the whole depth and the red/blue DN ratio of both the Landsat 8 OLI images and the digital photos. There were higher correlation coefficients
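
    The two image-processing steps named above — darkest-pixel (dark-object) subtraction and the red/blue ratio index — can be sketched as follows; the DN values are hypothetical and the function names are illustrative:

```python
import numpy as np

def dark_object_subtract(band):
    """Image-based atmospheric correction: subtract the darkest pixel,
    taken to represent pure atmospheric path radiance."""
    return band - band.min()

def red_blue_ratio(red, blue, eps=1e-9):
    """Per-pixel red/blue ratio index computed after dark-object
    subtraction of each band (eps guards against division by zero)."""
    r = dark_object_subtract(red).astype(float)
    b = dark_object_subtract(blue).astype(float)
    return r / (b + eps)

# Tiny 2x2 example with hypothetical DN values:
red = np.array([[30.0, 50.0], [40.0, 90.0]])
blue = np.array([[20.0, 30.0], [25.0, 60.0]])
idx = red_blue_ratio(red, blue)
```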

  18. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal

  19. Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.

    PubMed

    Smyth, Rachael E; Oram Cardy, Janis; Purcell, David

    2016-06-20

    This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer-grade digital camera. A visual inspection time task was recorded in short high-speed video clips, and the timing as reported by the task's program was compared to the timing as recorded in the video clips. Discrepancies between these two timing reports were investigated further and, based on the display refresh rate, a decision was made as to whether the discrepancy was large enough to affect the results reported by the task. In this particular study, the timing errors were not large enough to affect the results. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing of any software program on any operating system and display.
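
    The frame-counting arithmetic behind such a camera-based timing check can be sketched as below; the numbers are illustrative, not the study's data:

```python
def displayed_duration_ms(n_frames, camera_fps):
    """Duration of a stimulus that spans `n_frames` consecutive frames
    of a high-speed video recorded at `camera_fps` frames per second."""
    return n_frames * 1000.0 / camera_fps

def timing_error_ms(reported_ms, n_frames, camera_fps, display_hz=60):
    """Discrepancy between the task's reported duration and the duration
    measured from video, also expressed in display refresh intervals
    (the natural unit for deciding whether the error matters)."""
    measured = displayed_duration_ms(n_frames, camera_fps)
    error = measured - reported_ms
    refresh_ms = 1000.0 / display_hz
    return error, error / refresh_ms

# e.g. a task reports a 50 ms inspection time, but the stimulus is
# visible in 13 frames of 240 fps video:
err, in_refreshes = timing_error_ms(50.0, 13, 240)
# measured = 13/240 s ≈ 54.17 ms, i.e. about a quarter of a 60 Hz refresh
```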

  20. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r² = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r² ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r² ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ~96% accuracy, which is more than
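
    The core of Rubin's measure — the correlation of an image with laterally shifted copies of itself, which decays more slowly for coarse sediment — can be sketched as below. The calibration that turns a correlation curve into millimetres (fitted against sieved samples) is omitted, and the synthetic "sediment" images are illustrative:

```python
import numpy as np

def autocorrelation_curve(gray, max_offset):
    """Correlation of a standardized image with itself shifted by
    1..max_offset pixels along the row direction.

    Coarse grains stay self-similar over larger shifts, so their curve
    decays more slowly than that of fine grains — the basis of Rubin's
    (2004) grain-size algorithm.
    """
    g = gray.astype(float)
    g = (g - g.mean()) / g.std()
    curve = []
    for k in range(1, max_offset + 1):
        a, b = g[:, :-k].ravel(), g[:, k:].ravel()
        curve.append(float(np.mean(a * b)))
    return curve

rng = np.random.default_rng(0)
fine = rng.normal(size=(64, 64))                     # "grains" ~1 px wide
coarse = np.repeat(rng.normal(size=(64, 16)), 4, 1)  # "grains" ~4 px wide
c_fine = autocorrelation_curve(fine, 3)
c_coarse = autocorrelation_curve(coarse, 3)
# coarse sediment decorrelates more slowly: c_coarse[0] >> c_fine[0]
```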

  1. Digital multi-focusing from a single photograph taken with an uncalibrated conventional camera.

    PubMed

    Cao, Yang; Fang, Shuai; Wang, Zengfu

    2013-09-01

    The demand to restore all-in-focus images from defocused images and produce photographs focused at different depths is emerging in more and more cases, such as low-end hand-held cameras and surveillance cameras. In this paper, we manage to solve this challenging multi-focusing problem with a single image taken with an uncalibrated conventional camera. Different from all existing multi-focusing approaches, our method does not need to include a deconvolution process, which is quite time-consuming and causes ringing artifacts in the focused region and low depth of field. This paper proposes a novel systematic approach to realize multi-focusing from a single photograph. First, with the optical explanation for the local smoothness assumption, we present a new point-to-point defocus model. Next, the blur map of the input image, which reflects the amount of defocus blur at each pixel, is estimated in two steps: 1) with the sharp-edge prior, a rough blur map is obtained by estimating the blur amount at the edge regions; 2) the guided image filter is applied to propagate the blur values from the edge regions to the whole image, by which a refined blur map is obtained. Thus far, we can restore the all-in-focus photograph from a defocused input. To further produce photographs focused at different depths, the depth map must be derived from the blur map. To eliminate the ambiguity over the focal plane, user interaction is introduced and a binary graph cut algorithm is used. Coupled with the camera parameters, this approach produces images focused at different depths. The performance of this new multi-focusing algorithm is evaluated both objectively and subjectively on various test images. Both results demonstrate that this algorithm produces high-quality depth maps and multi-focusing results, outperforming previous approaches.

  2. Digital high-speed camera system for combustion research using UV-laser diagnostic under microgravity at Bremen drop tower

    NASA Astrophysics Data System (ADS)

    Renken, Hartmut; Bolik, T.; Eigenbrod, Ch.; Koenig, Jens; Rath, Hans J.

    1997-04-01

    A digital high-speed camera and recording system for 2D UV-laser spectroscopy was recently completed at the Bremen drop tower. At the moment the primary users are microgravity combustion researchers. The current project studies the reaction zones during the process of combustion. In particular, OH radicals are detected in 2D using the method of laser-induced predissociation fluorescence (LIPF). A pulsed high-energy excimer laser system combined with a two-stage intensified CCD camera allows a repetition rate of 250 images per second, corresponding to the maximum laser pulse repetition rate. The laser system is integrated at the top of the 110 m high evacuatable drop tube. Motorized mirrors are necessary to keep the beam position stable within the area of interest during the drop of the experiment capsule. The duration of one drop is 4.7 seconds. About 1500 images are captured and stored in the drop capsule's onboard 96-Mbyte RAM image storage system. After recovery of the capsule and data, special PC-based image processing software visualizes the movies and extracts physical information from the images. Now, after two and a half years of development, the system is operational and capable of temporally resolved 2D LIPF measurement of OH, H2O, O2 and CO concentrations and the 2D temperature distribution of these species.

  3. A digital architecture for striping noise compensation in push-broom hyperspectral cameras

    NASA Astrophysics Data System (ADS)

    Valenzuela, Wladimir E.; Figueroa, Miguel; Pezoa, Jorge E.; Meza, Pablo

    2015-09-01

    We present a striping noise compensation architecture for hyperspectral push-broom cameras, implemented on a Field-Programmable Gate Array (FPGA). The circuit is fast, compact, and low power, and is capable of eliminating the striping noise in-line during the image acquisition process. The architecture implements a multidimensional neural network (MDNN) algorithm for striping noise compensation previously reported by our group. The algorithm relies on the assumption that the amount of light impinging on neighboring photo-detectors is approximately the same in the spatial and spectral dimensions. Under this assumption, two striping noise parameters are estimated using spatial and spectral information from the raw data. We implemented the circuit on a Xilinx ZYNQ XC7Z2010 FPGA and tested it with images obtained from a NIR N17E push-broom camera, with a frame rate of 25 fps and a band-pixel rate of 1.888 MHz. The setup consists of a loop of 320 samples of 320 spatial lines and 236 spectral bands between 900 and 1700 nanometers, captured under laboratory conditions with a rigid push-broom controller. The noise compensation core can run at more than 100 MHz and consumes less than 30 mW of dynamic power, using less than 10% of the logic resources available on the chip. It also uses one of the two ARM processors available on the FPGA for data acquisition and communication.
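
    The neighbouring-detector assumption behind the algorithm can be illustrated with a minimal, offset-only destriping sketch — not the paper's MDNN algorithm, just the underlying idea that fixed-pattern column offsets can be separated from a scene that varies smoothly across detectors:

```python
import numpy as np

def destripe(frame):
    """Offset-only destriping of a push-broom frame (rows = scan lines,
    columns = detectors).

    Since neighbouring detectors see nearly the same light, the median
    over scan lines of the column-to-column difference isolates each
    detector's fixed offset; a cumulative sum chains these pairwise
    offsets into a stripe profile, which is then removed. (Simplified
    sketch of the assumption, not the paper's full MDNN algorithm.)
    """
    f = frame.astype(float)
    step = np.median(np.diff(f, axis=1), axis=0)    # detector-to-detector jumps
    profile = np.concatenate(([0.0], np.cumsum(step)))
    profile -= profile.mean()                       # preserve the frame mean
    return f - profile[None, :]

# A flat 100-valued scene with a +10 stripe on detector 3:
frame = np.full((8, 7), 100.0)
frame[:, 3] += 10.0
clean = destripe(frame)   # uniform again, mean level preserved
```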

  4. Point Cloud Derived from Video Frames: Accuracy Assessment in Relation to Terrestrial Laser Scanning and Digital Camera Data

    NASA Astrophysics Data System (ADS)

    Delis, P.; Zacharek, M.; Wierzbicki, D.; Grochala, A.

    2017-02-01

    The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired with a high-resolution camera, the NIKON D800. The research performed has shown that the highest accuracies are obtained for point clouds generated from video frames to which high-pass filtering and histogram equalization had been applied. The studies have also shown that to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85% or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
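
    The paper's frame-selection equation is not reproduced here, but under a simple constant-speed model the idea — pick the largest subsampling step that still guarantees the required overlap — can be sketched as follows (all parameter values are illustrative):

```python
import math

def frame_step(footprint_m, speed_mps, fps, min_overlap=0.85):
    """Largest frame subsampling step that still guarantees `min_overlap`
    between consecutive selected frames.

    Between frames separated by k/fps seconds the camera advances
    speed * k / fps metres, and the overlap of two footprints of length
    `footprint_m` along the motion is 1 - advance / footprint_m.
    """
    max_advance = (1.0 - min_overlap) * footprint_m
    k = math.floor(max_advance * fps / speed_mps)
    return max(1, k)

# e.g. a 4 m footprint, moving at 0.5 m/s, 25 fps video:
# 85% overlap allows a 0.6 m advance, i.e. every 30th frame
step = frame_step(4.0, 0.5, 25)
```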

  5. Study on camera calibration technique of 3D color digitization system

    NASA Astrophysics Data System (ADS)

    Sun, Yuchen; Ge, Baozhen

    2006-11-01

    3D (three-dimensional) color digitization of an object is fulfilled by a light-stripe method based on the laser triangulation principle and a direct capture method based on a color photo of the object. In this system, information matching between the 3D and color sensors and data registration between different sensors are accomplished by a sensor calibration process. The process uses the same round-filament target to calibrate all of the sensors together. The principle and procedure of the process are presented in detail. Finally, a costume model was 3D color digitized and the obtained data sets were processed by the method discussed; the results verify the correctness and feasibility of the algorithm.

  6. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record the observations from sensors. However, these paper drawings may be destroyed or contaminated by improper preservation or overuse. Further, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for intersection point location is devised, so that point-by-point processing is avoided with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water level records show that our proposed techniques are effective and capable of recovering the hydrological observations accurately.

  7. Noncontact imaging of plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion with a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa

    2016-03-01

    A non-contact imaging method with a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillation. In vivo experiments with human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.

  8. Definition and trade-off study of reconfigurable airborne digital computer system organizations

    NASA Technical Reports Server (NTRS)

    Conn, R. B.

    1974-01-01

    A highly reliable, fault-tolerant, reconfigurable computer system for aircraft applications was developed. The development and application of reliability and fault-tolerance assessment techniques are described. Particular emphasis is placed on the needs of an all-digital, fly-by-wire control system appropriate for a passenger-carrying airplane.

  9. Formal methods and their role in digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1995-01-01

    This report is based on one prepared as a chapter for the FAA Digital Systems Validation Handbook (a guide to assist FAA certification specialists with advanced technology issues). Its purpose is to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used in critical applications; and to suggest factors for consideration when formal methods are offered in support of certification. The presentation concentrates on the rationale for formal methods and on their contribution to assurance for critical applications within a context such as that provided by DO-178B (the guidelines for software used on board civil aircraft); it is intended as an introduction for those to whom these topics are new.

  10. Point light source detection characteristics of a SEC vidicon digital TV camera.

    PubMed

    Dargis, A B

    1978-03-01

    Optimization of the point source detection properties of a Secondary Electron Conduction (SEC) vidicon TV camera tube as a detector of point light sources such as star fields or certain optical spectra requires the accurate determination of peak height, half-peak width, background, and location of the point image. Two perpendicular Gaussian curves have been used to define a point image, allowing changes in the parameters of these Gaussian curves to be used in the study of SEC vidicon point source properties as a function of electrical and optical parameters. Peak height was shown to depend on priming time and a method was developed to reduce the priming time by almost an order of magnitude by momentarily raising the target voltage during priming. Power supply specifications needed for 0.1 pixel (picture element) addressing accuracy were found to be +/-0.03 V. Focus current was optimized to obtain the best sensitivity and resolution over the entire target. Peak height, background, and half-peak width were found to be strongly dependent on readout beam current. Target voltage, over the limited range examined, was found to affect only the gain without compromising other image parameters, so that any value could be used, consistent with gain and sensitivity required.

  11. Digital Intermediate Frequency Receiver Module For Use In Airborne Sar Applications

    DOEpatents

    Tise, Bertice L.; Dubbert, Dale F.

    2005-03-08

    A digital IF receiver (DRX) module directly compatible with advanced radar systems such as synthetic aperture radar (SAR) systems. The DRX can combine a 1 G-sample/sec 8-bit ADC with a high-speed digital signal processor, such as high gate-count FPGA technology or ASICs, to realize a wideband IF receiver. DSP operations implemented in the DRX can include quadrature demodulation and multi-rate, variable-bandwidth IF filtering. Pulse-to-pulse (Doppler domain) filtering can also be implemented in the form of a presummer (accumulator) and an azimuth prefilter. An out-of-band noise source can be employed to provide a dither signal to the ADC, which is later removed by digital signal processing. Both the range and Doppler domain filtering operations can be implemented using a unique pane architecture which allows on-the-fly selection of the filter decimation factor, and hence, the filter bandwidth. The DRX module can include a standard VME-64 interface for control, status, and programming. An interface can provide phase history data to the real-time image formation processors. A third front-panel data port (FPDP) interface can send wide-bandwidth, raw phase histories to a real-time phase history recorder for ground processing.
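
    The presummer mentioned above is, in essence, an accumulator that sums groups of consecutive pulses, trading pulse rate for signal-to-noise ratio. A minimal sketch; the fixed non-overlapping grouping is an assumption, not taken from the patent:

```python
import numpy as np

def presum(pulses, factor):
    """Doppler-domain presummer: accumulate non-overlapping groups of
    `factor` consecutive pulses, reducing the output pulse rate by `factor`.
    Trailing pulses that do not fill a group are dropped."""
    n = (len(pulses) // factor) * factor
    return np.asarray(pulses[:n]).reshape(-1, factor).sum(axis=1)
```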

  12. In situ particle size distributions and volume concentrations from a LISST-100 laser particle sizer and a digital floc camera

    NASA Astrophysics Data System (ADS)

    Mikkelsen, Ole A.; Hill, Paul S.; Milligan, Timothy G.; Chant, Robert J.

    2005-10-01

    A LISST-100 in situ laser particle sizer was deployed together with a digital floc camera (DFC) during field work in the Newark Bay area (USA) and along the Apennine margin (the Adriatic Sea, Italy). The purpose of these simultaneous deployments was to investigate how well in situ particle (floc) sizes and volume concentrations from the two different instruments compared. In the Adriatic Sea the two instruments displayed the same temporal variation, but the LISST provided lower estimates of floc size by a factor of 2-3, compared to the DFC. In the Newark Bay area, the LISST provided higher values of floc size by up to a factor of 2. When floc size was computed using only the overlapping size bins from the two instruments the discrepancy disappeared. The discrepancy in size was found to be related to several issues. First, the LISST measured particles in the 2.5-500 μm range, whereas the camera measured particles in the 135-9900 μm range, so generally the LISST should provide lower estimates of floc size, as it measures the smaller particles. Second, in the Newark Bay area scattering from particles >500 μm generally caused the LISST to overestimate the volume of particles in its largest size bin, thereby increasing apparent floc size. Relative to the camera, the LISST generally provided estimates of total floc volume that were lower by a factor of 3. Factors that could explain this discrepancy are errors arising from the accuracy of the LISST volume conversion coefficient and image processing. Regardless of these discrepancies, the shapes of the size spectra from the instruments were similar in the regions of overlap and could be matched by multiplying with an appropriate correction coefficient. This facilitated merging of the size spectra from the LISST and the DFC, yielding size spectra in the 2.5-9900 μm range. The merged size spectra generally had one or more peaks in the coarse end of the spectrum, presumably due to the presence of flocs. The fine

  13. Levee crest elevation profiles derived from airborne lidar-based high resolution digital elevation models in south Louisiana

    USGS Publications Warehouse

    Palaseanu-Lovejoy, Monica; Thatcher, Cindy A.; Barras, John A.

    2014-01-01

    This study explores the feasibility of using airborne lidar surveys to construct high-resolution digital elevation models (DEMs) and develop an automated procedure to extract levee longitudinal elevation profiles, for both federal levees in the Atchafalaya Basin and local levees in Lafourche Parish, south Louisiana. This approach can successfully accommodate a high degree of levee sinuosity and abrupt changes in levee orientation (direction) in planar coordinates, variations in levee geometries, and differing DEM resolutions. The federal levees investigated in the Atchafalaya Basin have crest elevations between 5.3 and 12 m, while the local counterparts in Lafourche Parish are between 0.76 and 2.3 m. The vertical uncertainty in the elevation data is considered when assessing federal crest elevation against the U.S. Army Corps of Engineers minimum height requirements to withstand the 100-year flood. Only approximately 5% of the crest points of the two federal levees investigated in the Atchafalaya Basin region met this requirement.

  14. Airborne digital-image data for monitoring the Colorado River corridor below Glen Canyon Dam, Arizona, 2009 - Image-mosaic production and comparison with 2002 and 2005 image mosaics

    USGS Publications Warehouse

    Davis, Philip A.

    2012-01-01

    Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects. 
The final image mosaic for the western end of the corridor has

  15. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera designed for use in screening for retinopathy of prematurity (ROP). There, as in other applications, a small, lightweight digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems, and whose electronics are truly portable, fitting in a standard boardcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP, an application that benefits from both of the camera's advantages: portability as well as digital imaging.

  16. Noninvasive imaging of human skin hemodynamics using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Tanaka, Noriyuki; Kawase, Tatsuya; Maeda, Takaaki; Yuasa, Tomonori; Aizu, Yoshihisa; Yuasa, Tetsuya; Niizeki, Kyuichi

    2011-08-01

    In order to visualize human skin hemodynamics, we investigated a method specifically developed to visualize the concentrations of oxygenated blood, deoxygenated blood, and melanin in skin tissue from digital RGB color images. Images of total blood concentration and oxygen saturation can also be reconstructed from the oxygenated and deoxygenated blood results. Experiments using tissue-like agar gel phantoms demonstrated the ability of the developed method to quantitatively visualize the transition from oxygenated to deoxygenated blood in the dermis. In vivo imaging of the chromophore concentrations and tissue oxygen saturation in the skin of the human hand was performed for 14 subjects during upper limb occlusion at 50 and 250 mm Hg. The response of the total blood concentration in the skin acquired by this method and the forearm volume changes obtained from a conventional strain-gauge plethysmograph were comparable during upper arm occlusion at pressures of both 50 and 250 mm Hg. The results presented in the present paper indicate the possibility of visualizing the hemodynamics of subsurface skin tissue.

  17. Conventional digital cameras as a tool for assessing leaf area index and biomass for cereal breeding.

    PubMed

    Casadesús, Jaume; Villegas, Dolors

    2014-01-01

    Affordable and easy-to-use methods for assessing biomass and leaf area index (LAI) would be of interest in most breeding programs. Here, we describe the evaluation of a protocol for photographic sampling and image analysis aimed at providing low-labor yet robust indicators of biomass and LAI. In this trial, two genotypes of triticale, two of bread wheat, and four of tritordeum were studied. At six dates during the growing cycle, biomass and LAI were measured destructively, and digital photographs were taken on the same dates. Several vegetation indices were calculated from each image. The results showed that repeatable and consistent values of the indices were obtained in consecutive photographic samplings on the same plots. The photographic indices were highly correlated with the destructive measurements, though the magnitude of the correlation was lower after anthesis. This work shows that photographic assessment of biomass and LAI can be fast and affordable, has good repeatability, and can be used under both bright and overcast skies. A practical vegetation index derived from the pictures is the fraction of green pixels over the total pixels of the image; it shows good correlations with all biomass variables, is the most robust to lighting conditions, and is easy to interpret.
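
    The green-fraction index described above can be approximated in a few lines of code. The specific greenness criterion used here (G exceeding both R and B by a margin) is an assumption for illustration, not necessarily the paper's segmentation rule:

```python
import numpy as np

def green_fraction(img_rgb, excess_green=20):
    """Fraction of 'green' pixels in an 8-bit RGB image: a pixel counts as
    green canopy when G exceeds both R and B by `excess_green` levels."""
    r, g, b = (img_rgb[..., i].astype(int) for i in range(3))
    green = (g - r > excess_green) & (g - b > excess_green)
    return green.mean()   # share of green pixels over total pixels
```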

  18. Design of a fault tolerant airborne digital computer. Volume 1: Architecture

    NASA Technical Reports Server (NTRS)

    Wensley, J. H.; Levitt, K. N.; Green, M. W.; Goldberg, J.; Neumann, P. G.

    1973-01-01

    This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) The capacity is to be matched to the aircraft environment. (2) The reliability is to be selectively matched to the criticality and deadline requirements of each of the computations. (3) The system is to be readily expandable and contractible. (4) The design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities; in addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.

  19. Design of a fault tolerant airborne digital computer. Volume 2: Computational requirements and technology

    NASA Technical Reports Server (NTRS)

    Ratner, R. S.; Shapiro, E. B.; Zeidler, H. M.; Wahlstrom, S. E.; Clark, C. B.; Goldberg, J.

    1973-01-01

    This final report summarizes the work on the design of a fault tolerant digital computer for aircraft. Volume 2 is composed of two parts. Part 1 is concerned with the computational requirements associated with an advanced commercial aircraft. Part 2 reviews the technology that will be available for the implementation of the computer in the 1975-1985 period. With regard to the computational task, 26 computations have been categorized according to computational load, memory requirements, criticality, permitted down-time, and the need to save data in order to effect a roll-back. The technology part stresses the impact of large scale integration (LSI) on the realization of logic and memory. Module-interconnection possibilities were also considered, so as to minimize fault propagation.

  20. Solar-Powered Airplane with Cameras and WLAN

    NASA Technical Reports Server (NTRS)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.; Johnson, Lee; Arvesen, John C.

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  1. A simultaneous charge and size measurement method for individual airborne particles using digital holographic particle imaging

    NASA Astrophysics Data System (ADS)

    Hammond, Adam; Dou, Zhongwang; Liang, Zach; Meng, Hui

    2016-11-01

    Recently, significant effort has been made to understand the effects of particle charge on particle-laden flow, particularly in the study of Lagrangian particle-pair statistics. Quantification of individual particle charge allows inter-particle electric forces to be related to turbulence-induced forces. Here we offer a simultaneous, individual particle charge and size measurement technique utilizing in-line digital holographic particle tracking velocimetry (hPTV). The method measures particle electrical mobility through the particle's velocity response within a uniform electric field using a sequence of holograms; the particle diameter is then measured from the same holograms using the matched filter developed by Lu et al. (2012) as an input for the charge calculation. Consequently, a benefit of this method is that charge is determined for individual particles, rather than as a mean over a group of particles, offering improved estimates of charge distributions for studies of particle-laden flow. This work was supported by NSF CBET-0967407 and CBET-0967349.
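
    If Stokes drag is assumed to balance the electric force on a particle drifting at terminal velocity (qE = 3πμdv), the charge follows directly from the measured drift velocity and diameter. This is a simplified sketch of the mobility-to-charge step, ignoring the Cunningham slip correction that a full treatment of small airborne particles would include:

```python
import math

def particle_charge(diameter_m, drift_velocity_m_s, e_field_v_m, mu_air=1.81e-5):
    """Charge (coulombs) from electrical mobility, assuming the Stokes drag
    balance q*E = 3*pi*mu*d*v.  mu_air is the dynamic viscosity of air in
    Pa*s; no slip correction is applied (simplification)."""
    return 3 * math.pi * mu_air * diameter_m * drift_velocity_m_s / e_field_v_m
```

For a 10 μm particle drifting at 1 mm/s in a 100 kV/m field, this gives a charge on the order of 10⁻¹⁷ C, i.e. roughly a hundred elementary charges.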

  2. Waste reduction efforts through evaluation and procurement of a digital camera system for the Alpha-Gamma Hot Cell Facility at Argonne National Laboratory-East.

    SciTech Connect

    Bray, T. S.; Cohen, A. B.; Tsai, H.; Kettman, W. C.; Trychta, K.

    1999-11-08

    The Alpha-Gamma Hot Cell Facility (AGHCF) at Argonne National Laboratory-East is a research facility where sample examinations involve traditional photography. The AGHCF documents samples with photographs (both Polaroid self-developing and negative film). Wastes generated include developing chemicals. The AGHCF evaluated, procured, and installed a digital camera system for the Leitz metallograph to significantly reduce labor, supplies, and wastes associated with traditional photography with a return on investment of less than two years.

  3. Processor architecture for airborne SAR systems

    NASA Technical Reports Server (NTRS)

    Glass, C. M.

    1983-01-01

    Digital processors for spaceborne imaging radars and application of the technology developed for airborne SAR systems are considered. Transferring algorithms and implementation techniques from airborne to spaceborne SAR processors offers obvious advantages. The following topics are discussed: (1) a quantification of the differences in processing algorithms for airborne and spaceborne SARs; and (2) an overview of three processors for airborne SAR systems.

  4. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    SciTech Connect

    Tokurei, Shogo E-mail: junjim@med.kyushu-u.ac.jp; Morishita, Junji E-mail: junjim@med.kyushu-u.ac.jp

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
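
    The weighting-factor (WF) conversion from RGB camera signals to a gray scale corresponding to display luminance amounts to a weighted sum per pixel. The Rec. 709 luma weights below are placeholders: the paper derives camera-specific factors by calibrating against the LCD's measured luminance, so these values are an assumption for illustration:

```python
import numpy as np

def to_luminance(rgb, weights=(0.2126, 0.7152, 0.0722)):
    """Convert an HxWx3 RGB image to a gray-scale (luminance-like) image as
    a per-pixel weighted sum of the channels.  The default weights are the
    Rec. 709 luma coefficients, standing in for camera-calibrated WFs."""
    return rgb.astype(float) @ np.asarray(weights)
```

For a monochrome display, the abstract's approach corresponds to weights of (0, 1, 0), i.e. using the green channel alone.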

  5. Using digital time-lapse cameras to monitor species-specific understorey and overstorey phenology in support of wildlife habitat assessment.

    PubMed

    Bater, Christopher W; Coops, Nicholas C; Wulder, Michael A; Hilker, Thomas; Nielsen, Scott E; McDermid, Greg; Stenhouse, Gordon B

    2011-09-01

    Critical to habitat management is the understanding of not only the location of animal food resources, but also the timing of their availability. Grizzly bear (Ursus arctos) diets, for example, shift seasonally as different vegetation species enter key phenological phases. In this paper, we describe the use of a network of seven ground-based digital camera systems to monitor understorey and overstorey vegetation within species-specific regions of interest. Established across an elevation gradient in western Alberta, Canada, the cameras collected true-colour (RGB) images daily from 13 April 2009 to 27 October 2009. Fourth-order polynomials were fit to an RGB-derived index, which was then compared to field-based observations of phenological phases. Using linear regression to statistically relate the camera and field data, results indicated that 61% (r² = 0.61, df = 1, F = 14.3, p = 0.0043) of the variance observed in the field phenological phase data is captured by the cameras for the start of the growing season and 72% (r² = 0.72, df = 1, F = 23.09, p = 0.0009) of the variance in length of growing season. Based on the linear regression models, the mean absolute differences in residuals between predicted and observed start of growing season and length of growing season were 4 and 6 days, respectively. This work extends upon previous research by demonstrating that specific understorey and overstorey species can be targeted for phenological monitoring in a forested environment, using readily available digital camera technology and RGB-based vegetation indices.
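
    Fitting a fourth-order polynomial to an RGB-derived greenness time series, as the authors do, might look like the sketch below. The synthetic seasonal signal and the use of the fitted curve's maximum as a phenology marker are illustrative assumptions, not the paper's exact criteria for start and length of growing season:

```python
import numpy as np

# Toy seasonal greenness track sampled every 5 days.
days = np.arange(0, 200, 5)
greenness = np.exp(-((days - 100) / 40.0) ** 2)   # synthetic green-up/senescence

# Fourth-order polynomial fit, as in the abstract.
coeffs = np.polyfit(days, greenness, deg=4)
fitted = np.polyval(coeffs, days)

# A simple phenology marker: the day at which the fitted curve peaks.
peak_day = days[np.argmax(fitted)]
```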

  6. How to optimize radiological images captured from digital cameras, using the Adobe Photoshop 6.0 program.

    PubMed

    Chalazonitis, A N; Koumarianos, D; Tzovara, J; Chronopoulos, P

    2003-06-01

    Over the past decade, the technology that permits images to be digitized, together with the reduction in the cost of digital equipment, has allowed quick digital transfer of any conventional radiological film. Images can then be transferred to a personal computer, and several software programs are available that can manipulate their digital appearance. In this article, the fundamentals of digital imaging are discussed, as well as the wide variety of optional adjustments that the Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA) program can offer to present radiological images with satisfactory digital imaging quality.

  7. Land cover/use classification of Cairns, Queensland, Australia: A remote sensing study involving the conjunctive use of the airborne imaging spectrometer, the large format camera and the thematic mapper simulator

    NASA Technical Reports Server (NTRS)

    Heric, Matthew; Cox, William; Gordon, Daniel K.

    1987-01-01

    In an attempt to improve the land cover/use classification accuracy obtainable from remotely sensed multispectral imagery, Airborne Imaging Spectrometer-1 (AIS-1) images were analyzed in conjunction with Thematic Mapper Simulator (NS001) imagery, Large Format Camera color infrared photography, and black-and-white aerial photography. Specific portions of the combined data set were registered and used for classification, and the resulting derived data were tested using an overall accuracy assessment method. Precise photogrammetric 2D-3D-2D geometric modeling techniques are not the basis for this study; instead, the discussion presents the spectral findings resulting from the image-to-image registrations. Problems associated with the AIS-1/TMS integration are considered, and useful applications of the imagery combination are presented. More advanced methodologies for imagery integration are needed if multisystem data sets are to be utilized fully. Nevertheless, the research described herein provides a formulation for future Earth Observation Station-related multisensor studies.

  8. Ground-based detection of nighttime clouds above Manila Observatory (14.64°N, 121.07°E) using a digital camera.

    PubMed

    Gacal, Glenn Franco B; Antioquia, Carlo; Lagrosas, Nofel

    2016-08-01

    Ground-based cloud detection at nighttime is achieved by using cameras, lidars, and ceilometers. Despite these numerous instruments gathering cloud data, there is still an acknowledged scarcity of information on quantified local cloud cover, especially at nighttime. In this study, a digital camera is used to continuously collect images near the sky zenith at nighttime in an urban environment. An algorithm is developed to analyze pixel values of images of nighttime clouds. A minimum threshold pixel value of 17 is assigned to determine cloud occurrence. The algorithm uses temporal averaging to estimate the cloud fraction based on the results within the limited field of view. The analysis of the data from the months of January, February, and March 2015 shows that cloud occurrence is low during the months with relatively lower minimum temperature (January and February), while cloud occurrence during the warmer month (March) increases.
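
    The detection scheme described above (a pixel counts as cloud when its value reaches the threshold of 17, with temporal averaging over many frames) can be sketched as follows; the simple mean over all frames is a stand-in for the authors' exact averaging procedure:

```python
import numpy as np

def cloud_fraction(gray_frames, threshold=17):
    """Estimate cloud fraction from a stack of nighttime gray-scale frames.

    A pixel at or above `threshold` is classified as cloud (17 follows the
    abstract); the boolean masks are then averaged over pixels and frames."""
    frames = np.asarray(gray_frames, dtype=float)
    cloudy = frames >= threshold
    return cloudy.mean()
```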

  9. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    PubMed Central

    Leitritz, Martin Alexander; Ziemssen, Focke; Bartz-Schmidt, Karl Ulrich; Voykov, Bogomil

    2014-01-01

    Purpose To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods A high-speed camera taking 300 frames per second observed movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were −0.32 mm (range −0.69 to 0.024) and 0.175 mm (range −0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurements of iris-claw intraocular lenses and angle-supported intraocular lenses movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility during kinetic stress during eye movements. Long-term assessment by high-speed analysis with higher case numbers has to clarify the relationship between progressing motility and endothelial cell damage. PMID:25071365

  10. Optical engineering application of modeled photosynthetically active radiation (PAR) for high-speed digital camera dynamic range optimization

    NASA Astrophysics Data System (ADS)

    Alves, James; Gueymard, Christian A.

    2009-08-01

    As efforts to create accurate yet computationally efficient estimation models for clear-sky photosynthetically active solar radiation (PAR) have succeeded, the range of practical engineering applications where these models can be successfully applied has increased. This paper describes a novel application of the REST2 radiative model (developed by the second author) in optical engineering. The PAR predictions in this application are used to predict the possible range of instantaneous irradiances that could impinge on the image plane of a stationary video camera designed to image license plates on moving vehicles. The overall spectral response of the camera (including lens and optical filters) is similar to the 400-700 nm PAR range, thereby making PAR irradiance (rather than luminance) predictions most suitable for this application. The accuracy of the REST2 irradiance predictions for horizontal surfaces, coupled with another radiative model to obtain irradiances on vertical surfaces, and to standard optical image formation models, enable setting the dynamic range controls of the camera to ensure that the license plate images are legible (unsaturated with adequate contrast) regardless of the time of day, sky condition, or vehicle speed. A brief description of how these radiative models are utilized as part of the camera control algorithm is provided. Several comparisons of the irradiance predictions derived from the radiative model versus actual PAR measurements under varying sky conditions with three Licor sensors (one horizontal and two vertical) have been made and showed good agreement. Various camera-to-plate geometries and compass headings have been considered in these comparisons. Time-lapse sequences of license plate images taken with the camera under various sky conditions over a 30-day period are also analyzed. They demonstrate the success of the approach at creating legible plate images under highly variable lighting, which is the main goal of this

  11. Measurement of Young’s modulus and Poisson’s ratio of metals by means of ESPI using a digital camera

    NASA Astrophysics Data System (ADS)

    Francisco, J. B. Pascual; Michtchenko, A.; Barragán Pérez, O.; Susarrey Huerta, O.

    2016-09-01

    In this paper, mechanical experiments with a low-cost interferometry set-up are presented. The set-up is suitable for an undergraduate laboratory where optical equipment is absent. The arrangement consists of two planes of illumination, allowing the measurement of the two perpendicular in-plane displacement directions. An axial load was applied on three different metals, and the longitudinal and transversal displacements were measured sequentially. A digital camera was used to acquire the images of the different states of load of the illuminated area. A personal computer was used to perform the digital subtraction of the images to obtain the fringe correlations, which are needed to calculate the displacements. Finally, Young’s modulus and Poisson’s ratio of the metals were calculated using the displacement data.
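
The digital subtraction step and the fringe-to-displacement conversion can be sketched as below. The dual-beam sensitivity formula is the standard in-plane ESPI relation; array shapes and names are assumptions for illustration:

```python
import numpy as np

def correlation_fringes(img_before, img_after):
    """Digital subtraction of two speckle images: dark fringes appear where
    the speckle phase change between load states is a multiple of 2*pi."""
    return np.abs(img_before.astype(np.float64) - img_after.astype(np.float64))

def in_plane_displacement(fringe_order, wavelength, half_angle):
    """Dual-beam in-plane ESPI sensitivity: one fringe corresponds to a
    displacement of wavelength / (2 sin(theta)), where theta is the
    illumination half-angle of the two beams."""
    return fringe_order * wavelength / (2.0 * np.sin(half_angle))
```

Counting fringes across the gauge length in both illumination planes gives the longitudinal and transverse displacements from which E and Poisson's ratio follow.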

  12. Experimental Demonstration of Extended Depth-of-Field F/1.2 Visible High Definition Camera with Jointly Optimized Phase Mask and Real-Time Digital Processing

    NASA Astrophysics Data System (ADS)

    Burcklen, M.-A.; Diaz, F.; Lepretre, F.; Rollin, J.; Delboulbé, A.; Lee, M.-S. L.; Loiseaux, B.; Koudoli, A.; Denel, S.; Millet, P.; Duhem, F.; Lemonnier, F.; Sauer, H.; Goudail, F.

    2015-10-01

    Increasing the depth of field (DOF) of compact visible high resolution cameras while maintaining high imaging performance over the DOF range is crucial for applications such as night vision goggles or industrial inspection. In this paper, we present the end-to-end design and experimental validation of an extended depth-of-field visible High Definition camera with a very small f-number, combining a six-ring pyramidal phase mask in the aperture stop of the lens with digital deconvolution. The phase mask and the deconvolution algorithm are jointly optimized during the design step so as to maximize the quality of the deconvolved image over the DOF range. The deconvolution processing is implemented in real time on a Field-Programmable Gate Array, and we show that it requires very low power consumption. By means of MTF measurements and imaging experiments we experimentally characterize the performance of the camera both with and without the phase mask, and thereby demonstrate a significant increase in depth of field, by a factor of 2.5, as expected from the design step.

  13. Real-time photothermoplastic 8-inch camera with an emphasis on the visualization of 3D digital data by holographic means

    NASA Astrophysics Data System (ADS)

    Cherkasov, Yuri A.; Alexandrova, Elena L.; Rumjantsev, Alexander G.; Smirnov, Mikhail V.

    1995-04-01

    The development and investigation of a large-format (8-inch) real-time photothermoplastic (PTP) camera are reported. The PTP camera is used for operative recording of 3D images by means of compound and digital holography, and for visualization of these holograms as static 3D images. The recording regimes are optimized using the authors' model of the thermodevelopment of relief-phase PTP images. According to that model, the maximal deformation (diffraction efficiency) is achieved by increasing the charge contrast of the electrostatic latent image formed before the viscosity drops during thermodevelopment; this is accomplished by controlling the thermodevelopment regime. The possibilities of increasing the camera size (to 14 inches), raising the photosensitivity and extending its spectral range, creating Benton holograms, and increasing the response speed to 25 Hz are also discussed.

  14. Getting the Picture: Using the Digital Camera as a Tool to Support Reflective Practice and Responsive Care

    ERIC Educational Resources Information Center

    Luckenbill, Julia

    2012-01-01

    Many early childhood educators use cameras to share the charming things that children do and the artwork they make. Programs often bind these photographs into portfolios and give them to children and their families as mementos at the end of the year. In the author's classrooms, they use photography on a daily basis to document children's…

  15. Anger Camera Firmware

    SciTech Connect

    2010-11-19

    The firmware is responsible for the operation of the Anger camera electronics, calculation of position and time of flight, and digital communications. It provides a first-stage analysis of 48 analog signals that have been converted to digital values using A/D converters.
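
The position calculation referred to is presumably classic Anger logic: the interaction position is the signal-weighted mean of the photodetector coordinates. A minimal sketch, with detector coordinates and names assumed (not the actual firmware):

```python
import numpy as np

def anger_position(signals, xs, ys):
    """Anger logic: weight each detector's (x, y) coordinate by its signal
    amplitude and return the normalized centroid of the event."""
    s = np.asarray(signals, dtype=np.float64)
    total = s.sum()
    return float(np.dot(s, xs) / total), float(np.dot(s, ys) / total)
```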

  16. The Laser Vegetation Imaging Sensor (LVIS): A Medium-Altitude, Digitization-Only, Airborne Laser Altimeter for Mapping Vegetation and Topography

    NASA Technical Reports Server (NTRS)

    Blair, J. Bryan; Rabine, David L.; Hofton, Michelle A.

    1999-01-01

    The Laser Vegetation Imaging Sensor (LVIS) is an airborne, scanning laser altimeter designed and developed at NASA's Goddard Space Flight Center. LVIS operates at altitudes up to 10 km above ground, and is capable of producing a data swath up to 1000 m wide nominally with 25 m wide footprints. The entire time history of the outgoing and return pulses is digitized, allowing unambiguous determination of range and return pulse structure. Combined with aircraft position and attitude knowledge, this instrument produces topographic maps with decimeter accuracy and vertical height and structure measurements of vegetation. The laser transmitter is a diode-pumped Nd:YAG oscillator producing 1064 nm, 10 nsec, 5 mJ pulses at repetition rates up to 500 Hz. LVIS has recently demonstrated its ability to determine topography (including sub-canopy) and vegetation height and structure on flight missions to various forested regions in the U.S. and Central America. The LVIS system is the airborne simulator for the Vegetation Canopy Lidar (VCL) mission (a NASA Earth remote sensing satellite due for launch in 2000), providing simulated data sets and a platform for instrument proof-of-concept studies. The topography maps and return waveforms produced by LVIS provide Earth scientists with a unique data set allowing studies of topography, hydrology, and vegetation with unmatched accuracy and coverage.
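
Because the full outgoing and return waveforms are digitized, range follows from the time between the two pulses. A peak-detection sketch (a simplification of real waveform processing; sample spacing and pulse shapes are assumed):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def range_from_waveform(outgoing, returned, dt):
    """Two-way time of flight from the sample indices of the outgoing and
    return pulse peaks, converted to one-way range. dt is the digitizer
    sample spacing in seconds."""
    t_out = np.argmax(outgoing) * dt
    t_ret = np.argmax(returned) * dt
    return C * (t_ret - t_out) / 2.0
```

In practice the return of a vegetated footprint is a multi-peaked waveform; the canopy-top and ground peaks bracket the vegetation height.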

  17. Estimation of the Spectral Sensitivity Functions of Un-Modified and Modified Commercial Off-The-Shelf Digital Cameras to Enable Their Use as a Multispectral Imaging System for UAVs

    NASA Astrophysics Data System (ADS)

    Berra, E.; Gibson-Poole, S.; MacArthur, A.; Gaulton, R.; Hamilton, A.

    2015-08-01

    Commercial off-the-shelf (COTS) digital cameras on-board unmanned aerial vehicles (UAVs) have the potential to be used as multispectral imaging systems; however, their spectral sensitivity is usually unknown and needs to be either measured or estimated. This paper details a step-by-step methodology for identifying the spectral sensitivity of modified (to be responsive to near infra-red wavelengths) and un-modified COTS digital cameras, showing the results of its application for three different models of camera. Six digital still cameras, which are being used as imaging systems on-board different UAVs, were selected to have their spectral sensitivities measured by a monochromator. Each camera was exposed to monochromatic light ranging from 370 nm to 1100 nm in 10 nm steps, with images of each step recorded in RAW format. The RAW images were converted linearly into TIFF images using DCRaw, an open-source program, before being batch processed through ImageJ (also open-source), which calculated the mean and standard deviation values from each of the red-green-blue (RGB) channels over a fixed central region within each image. These mean values were then related to the relative spectral radiance from the monochromator and its integrating sphere, in order to obtain the relative spectral response (RSR) for each of the camera's colour channels. It was found that different un-modified camera models present very different RSRs in some channels, and one of the modified cameras showed a response that was unexpected. This highlights the need to determine the RSR of a camera before using it for any quantitative studies.
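
The final normalization step, relating dark-corrected mean RGB values to the sphere's relative spectral radiance at each monochromator step, might be sketched as below (array layout is an assumption; the paper's exact processing may differ):

```python
import numpy as np

def relative_spectral_response(mean_dn, dark_dn, sphere_radiance):
    """mean_dn: (n_wavelengths, 3) mean RGB digital numbers from the sweep;
    dark_dn: dark-frame offset (scalar or broadcastable array);
    sphere_radiance: (n_wavelengths,) relative spectral radiance of the
    integrating sphere at each step. Divide the dark-corrected signal by
    the source radiance, then normalize each channel to its peak."""
    signal = (mean_dn - dark_dn) / sphere_radiance[:, None]
    return signal / signal.max(axis=0)
```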

  18. Target-Tracking Camera for a Metrology System

    NASA Technical Reports Server (NTRS)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
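
The one-dimensional PSD readout follows the standard current-ratio relation: the photocurrents at the two end contacts split in proportion to the spot position along the active length. A sketch (variable names assumed):

```python
def psd_position(i1, i2, length):
    """Spot position along a 1-D PSD of the given active length, measured
    from the center: the difference of the end-contact currents divided by
    their sum, scaled by half the length."""
    return 0.5 * length * (i2 - i1) / (i1 + i2)
```

Because this is a simple analog ratio, it can be computed with analog circuitry at hundreds of hertz, which is the update-rate advantage the abstract describes.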

  19. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  20. The necessity of exterior orientation parameters for the rigorous geometric correction of MEIS-II airborne digital images

    SciTech Connect

    Bannari, A.; Morin, D.; Gibson, J.R.

    1996-11-01

    The Canada Land Use Monitoring Program is attempting to replace aerial photographs by remote sensing imagery (satellite or airborne). The Canada Center for Remote Sensing (CCRS) is implementing an airborne multi-detector electro-optical imaging system (MEIS-II). The acceptance of airborne scanners has been slow principally due to poor spatial resolution and distortions induced by aircraft motion. To address this geometric problem, CCRS has developed a rigorous correction method based on fundamental photogrammetric principles (collinearity and coplanarity) and auxiliary navigation data (attitude, altitude and aircraft speed) measured in relation to time by an inertial navigation system (INS). The method can process images in monoscopy or stereoscopy. It uses primarily a low-order polynomial function for correcting auxiliary data based on the method of least squares and a few control points. The results are then used in the geometric correction procedure. In this study, we discuss the effect of geometric distortions caused by aircraft motion and we test two geometric correction methods. The first method is the one developed by CCRS mentioned above. The second method is based on a second order polynomial function. The effect of control point precision on the reliability of the geometric correction using geodetic points and other points derived from the 1/20 000 topographical map is examined. The results show a noticeable difference between the two approaches tested. The photogrammetric method, based on the condition of collinearity and coplanarity, and related to navigation data, results in precision in the order of one pixel with geodetic control points. The use of geodetic control points permits the elimination of the planimetric error characteristic of the topographical map. The polynomial method provides precision which is in the order of five pixels whatever the type and precision of the control points. 18 refs., 6 figs., 2 tabs.

  1. Integration of airborne Thematic Mapper Simulator (TMS) data and digitized aerial photography via an ISH transformation. [Intensity, Saturation, Hue]

    NASA Technical Reports Server (NTRS)

    Ambrosia, Vincent G.; Myers, Jeffrey S.; Ekstrand, Robert E.; Fitzgerald, Michael T.

    1991-01-01

    A simple method for enhancing the spatial and spectral resolution of disparate data sets is presented. Two data sets, digitized aerial photography at a nominal spatial resolution of 3.7 meters and TMS digital data at 24.6 meters, were coregistered through a bilinear interpolation to solve the problem of blocky pixel groups resulting from rectification expansion. The two data sets were then subjected to intensity-saturation-hue (ISH) transformations in order to 'blend' the high-spatial-resolution (3.7 m) digitized RC-10 photography with the high spectral (12-band) and lower spatial (24.6 m) resolution TMS digital data. The resultant merged products make it possible to perform large-scale mapping, ease photointerpretation, and can be derived for any of the 12 available TMS spectral bands.
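
The merging idea can be illustrated with a simpler mean-intensity substitution rather than the full ISH transform used in the paper: replace the intensity of the upsampled multispectral image with the high-resolution panchromatic band while keeping the band ratios (hence the color) intact. A sketch with assumed array shapes:

```python
import numpy as np

def intensity_substitution(ms_rgb, pan):
    """ms_rgb: (H, W, 3) multispectral image already upsampled to the
    high-resolution grid; pan: (H, W) high-resolution intensity image.
    Rescale every band by pan / current mean intensity, preserving hue."""
    intensity = ms_rgb.mean(axis=2)
    gain = pan / (intensity + 1e-9)  # epsilon guards against division by zero
    return ms_rgb * gain[..., None]
```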

  2. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital single-lens reflex (DSLR) cameras have appeared. As a consequence we can buy cameras of higher and higher pixel number, and mass production has resulted in a great reduction of prices. CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allows images to be taken with much less noise. The software background is developing in a similar way: intelligent programs are created for post-processing and other supplementary work. Nowadays we can find a digital camera in almost every household, and many of these cameras are DSLRs. These can be used very well for astronomical imaging, which is nicely demonstrated by the amount and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rise in the standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I will introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these to their own circumstances.

  3. In vivo multispectral imaging of the absorption and scattering properties of exposed brain using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Yoshida, Keiichiro; Ishizuka, Tomohiro; Mizushima, Chiharu; Nishidate, Izumi; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2015-04-01

    To evaluate multi-spectral images of the absorption and scattering properties in the cerebral cortex of rat brain, we investigated spectral reflectance images estimated by the Wiener estimation method using a digital red-green-blue camera. A Monte Carlo simulation-based multiple regression analysis for the corresponding spectral absorbance images at nine wavelengths (500, 520, 540, 560, 570, 580, 600, 730, and 760 nm) was then used to specify the absorption and scattering parameters. The spectral images of absorption and reduced scattering coefficients were reconstructed from the absorption and scattering parameters. We performed in vivo experiments on exposed rat brain to confirm the feasibility of this method. The estimated images of the absorption coefficients were dominated by hemoglobin spectra. The estimated images of the reduced scattering coefficients had a broad scattering spectrum, exhibiting a larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature.
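
The Wiener estimation of spectra from RGB values can be sketched with the standard training-based estimator; matrix shapes and the noiseless form are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def wiener_matrix(train_spectra, train_rgb):
    """Training-based Wiener estimator W minimizing E||s - W r||^2:
    W = S R^T (R R^T)^{-1}, with training spectra S (n_wl x n_samples) and
    the corresponding camera responses R (3 x n_samples)."""
    S, R = train_spectra, train_rgb
    return S @ R.T @ np.linalg.inv(R @ R.T)

def estimate_spectrum(W, rgb):
    """Recover a spectral vector from a single RGB measurement."""
    return W @ rgb
```

For spectra lying in the subspace spanned by the training set (as tested below), the noiseless estimator is exact; real use adds a noise covariance term to the inverted matrix.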

  4. Hierarchical object-based classification of ultra-high-resolution digital mapping camera (DMC) imagery for rangeland mapping and assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ultra high resolution digital aerial photography has great potential to complement or replace ground measurements of vegetation cover for rangeland monitoring and assessment. We investigated object-based image analysis (OBIA) techniques for classifying vegetation in southwestern U.S. arid rangelands...

  5. ProgRes 3000: a digital color camera with a 2-D array CCD sensor and programmable resolution up to 2994 x 2320 picture elements

    NASA Astrophysics Data System (ADS)

    Lenz, Reimar K.; Lenz, Udo

    1990-11-01

    A newly developed imaging principle, two-dimensional microscanning with Piezo-controlled Aperture Displacement (PAD), allows for high image resolutions. The advantages of line scanners (high resolution) are combined with those of CCD area sensors (high light sensitivity, geometrical accuracy and stability, easy focussing, illumination control, and selection of field of view by means of TV real-time imaging). A custom-designed sensor optimized for small sensor-element apertures and color fidelity eliminates the need for color filter revolvers or mechanical shutters and guarantees good color convergence. By altering the computer-controlled microscan patterns, spatial and temporal resolution become interchangeable, their product being a constant. The highest temporal resolution is TV real-time (50 fields/sec); the highest spatial resolution is 2994 x 2320 picture elements (Pels) for each of the three color channels (28 MBytes of raw image data in 8 sec). Thus, for the first time it becomes possible to take 35mm-slide-quality still color images of natural 3D scenes by purely electronic means. Nearly "square" Pels as well as hexagonal sampling schemes are possible. Excellent geometrical accuracy and low noise are guaranteed by sensor-element (Sel) synchronous analog-to-digital conversion within the camera head. The camera's principle of operation and the procedure to calibrate the two-dimensional piezo-mechanical motion with an accuracy of better than 0.2 μm RMSE in image space are explained. The remaining positioning inaccuracy may be further

  6. Application of a digital high-speed camera system for combustion research by using UV laser diagnostics under microgravity at Bremen drop tower

    NASA Astrophysics Data System (ADS)

    Renken, Hartmut; Bolik, T.; Eigenbrod, Ch.; Koenig, Jens; Rath, Hans J.

    1997-05-01

    This paper describes a digital high-speed camera and recording system that will be used primarily for combustion research under microgravity (μg) at the Bremen drop tower. To study the reaction zones during combustion, OH radicals in particular are detected in 2D using the method of laser-induced predissociation fluorescence (LIPF). A pulsed high-energy excimer laser system combined with a two-stage intensified CCD camera allows a repetition rate of 250 images (256 x 256 pixels) per second, matching the maximum laser pulse repetition rate. The laser system is integrated at the top of the 110 m high evacuable drop tube. Motorized mirrors are necessary to maintain a stable beam position within the area of interest during the drop of the experiment capsule. The duration of one drop is 4.7 seconds (microgravity conditions). About 1500 images are captured and stored on board in the drop capsule's 96-Mbyte RAM image storage system. After recovery of the capsule and data, PC-based image processing software visualizes the movies and extracts physical information from the images. Now, after two and a half years of development, the system is operational and capable of high-temporal-resolution 2D LIPF measurements of OH, H2O, O2, and CO concentrations and of the 2D temperature distribution of these species.

  7. Camera-based measurement for transverse vibrations of moving catenaries in mine hoists using digital image processing techniques

    NASA Astrophysics Data System (ADS)

    Yao, Jiannan; Xiao, Xingming; Liu, Yao

    2016-03-01

    This paper proposes a novel, non-contact, sensing method to measure the transverse vibrations of hoisting catenaries in mine hoists. Hoisting catenaries are typically moving cables and it is not feasible to use traditional methods to measure their transverse vibrations. In order to obtain the transverse displacements of an arbitrary point in a moving catenary, by superposing a mask image having the predefined reference line perpendicular to the hoisting catenaries on each frame of the processed image sequence, the dynamic intersecting points with a grey value of 0 in the image sequence could be identified. Subsequently, by traversing the coordinates of the pixel with a grey value of 0 and calculating the distance between the identified dynamic points from the reference, the transverse displacements of the selected arbitrary point in the hoisting catenary can be obtained. Furthermore, based on a theoretical model, the reasonability and applicability of the proposed camera-based method were confirmed. Additionally, a laboratory experiment was also carried out, which then validated the accuracy of the proposed method. The research results indicate that the proposed camera-based method is suitable for the measurement of the transverse vibrations of moving cables.
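
The intersection-detection step, finding the pixels with grey value 0 where the catenary crosses a fixed reference line, might look like the sketch below (image conventions, a vertical reference column, and a white background are assumptions):

```python
import numpy as np

def catenary_displacement(frame, ref_col, axis_row=None):
    """frame: 2-D greyscale image where the processed catenary has grey
    value 0 on a white (255) background. Find where the cable crosses the
    vertical reference column ref_col and return the crossing row, or its
    offset from axis_row if one is given. Returns None if no crossing."""
    rows = np.where(frame[:, ref_col] == 0)[0]
    if rows.size == 0:
        return None
    crossing = rows.mean()  # average over the cable's pixel thickness
    return crossing if axis_row is None else crossing - axis_row
```

Applying this to every frame of the sequence yields the transverse displacement time history of the chosen point.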

  8. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera

    SciTech Connect

    Miller, Brian W.; Frost, Sofia H. L.; Frayo, Shani L.; Kenoyer, Aimee L.; Santos, Erlinda; Jones, Jon C.; Orozco, Johnnie J.; Green, Damian J.; Press, Oliver W.; Pagel, John M.; Sandmaier, Brenda M.; Hamlin, Donald K.; Wilbur, D. Scott; Fisher, Darrell R.

    2015-07-15

    Purpose: Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 ({sup 211}At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10{sup −4} cpm/cm{sup 2} (40 mm diameter detector area
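
Event-by-event position estimation of a scintillation flash in a CCD/CMOS frame, as performed by systems of this kind, can be illustrated with a thresholded-centroid sketch (the threshold handling and frame layout are assumptions, not the actual iQID algorithm):

```python
import numpy as np

def event_centroid(frame, threshold):
    """Intensity-weighted centroid of a single scintillation flash:
    suppress everything at or below the noise threshold, then take the
    first moments of the remaining light. Returns (x, y) or None."""
    w = np.where(frame > threshold, frame, 0).astype(np.float64)
    total = w.sum()
    if total == 0:
        return None
    ys, xs = np.indices(frame.shape)
    return (xs * w).sum() / total, (ys * w).sum() / total
```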

  9. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera

    PubMed Central

    Miller, Brian W.; Frost, Sofia H. L.; Frayo, Shani L.; Kenoyer, Aimee L.; Santos, Erlinda; Jones, Jon C.; Green, Damian J.; Hamlin, Donald K.; Wilbur, D. Scott; Orozco, Johnnie J.; Press, Oliver W.; Pagel, John M.; Sandmaier, Brenda M.

    2015-01-01

    Purpose: Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (211At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10−4 cpm/cm2 (40 mm diameter detector area). Simultaneous imaging of

  10. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images from digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes from diverse sciences such as computing, mechanics, thermal physics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technology. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  11. Single chip camera active pixel sensor

    NASA Technical Reports Server (NTRS)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single-chip camera includes communications to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the circuitry necessary for operating the chip using a single pin.

  12. Design of a modular digital computer system DRL 4 and 5. [design of airborne/spaceborne computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Design and development efforts for a spaceborne modular computer system are reported. An initial baseline description is followed by an interface design that includes definition of the overall system response to all classes of failure. Final versions for the register level designs for all module types were completed. Packaging, support and control executive software, including memory utilization estimates and design verification plan, were formalized to insure a soundly integrated design of the digital computer system.

  13. Effects of image compression and illumination on digital terrain models for the stereo camera of the BepiColombo mission

    NASA Astrophysics Data System (ADS)

    Re, C.; Simioni, E.; Cremonese, G.; Roncella, R.; Forlani, G.; Langevin, Y.; Da Deppo, V.; Naletto, G.; Salemi, G.

    2017-02-01

    The great amount of data that will be produced during the imaging of Mercury by the stereo camera (STC) of the BepiColombo mission must be reconciled with the restrictions imposed by the downlink bandwidth, which could drastically reduce the duration and frequency of the observations. The implementation of an on-board real-time data compression strategy preserving as much information as possible is therefore mandatory. The degradation that image compression might cause to DTM accuracy is worth investigating. During the stereo-validation procedure of the innovative STC imaging system, several image pairs of an anorthosite sample and a modelled piece of concrete were acquired under different illumination angles. This set of images has been used to test the effects of the compression algorithm (Langevin and Forni, 2000) on the accuracy of the DTMs produced by dense image matching. Different configurations, taking into account both the illumination of the surface and the compression ratio, have been considered. The accuracy of the DTMs is evaluated by comparison with a high-resolution laser-scan acquisition of the same targets. The error assessment also includes an analysis on the image plane indicating the influence of the compression procedure on the image measurements.

  14. New approach to color calibration of high fidelity color digital camera by using unique wide gamut color generator based on LED diodes

    NASA Astrophysics Data System (ADS)

    Kretkowski, M.; Shimodaira, Y.; Jabłoński, R.

    2008-11-01

    Development of a high accuracy color reproduction system requires certain instrumentation and a reference for color calibration. Our research led to the development of a high fidelity color digital camera with implemented filters that realize the color matching functions. The output signal returns XYZ values, which provide an absolute description of color. In order to produce XYZ output, a mathematical conversion must be applied to the CCD output values, introducing a conversion matrix. The conversion matrix coefficients are calculated by using a color reference with known XYZ values and the corresponding output signals from the CCD sensor under each filter, acquired from a certain number of color samples. The most important feature of the camera is its ability to acquire colors from the complete theoretically visible color gamut, due to the implemented filters. However, market-available color references such as various color checkers are enclosed within the HDTV gamut, which is insufficient for calibration over the whole operating color range. This led to the development of a unique color reference based on LED diodes, called the LED Color Generator (LED CG). It is capable of displaying colors in a wide color gamut estimated by the chromaticity coordinates of 12 primary colors; the total number of colors it can produce is 255^12. Its biggest advantage is the possibility of displaying colors with a desired spectral distribution (within certain approximations) owing to the multiple primary colors it comprises. The average color difference obtained for test colors was ΔE ≈ 0.78 for calibration with the LED CG. The result is better and more repeatable than with the Macbeth ColorChecker™, which typically gives ΔE ≈ 1.2 and, in the best case, ΔE ≈ 0.83 with specially developed techniques.
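
The conversion-matrix calculation described, fitting M so that XYZ ≈ M · (CCD outputs) over the reference colors, is an ordinary least-squares problem. A sketch with assumed array shapes (not the authors' exact procedure):

```python
import numpy as np

def color_conversion_matrix(ccd_outputs, xyz_refs):
    """Solve for the 3x3 matrix M minimizing ||ccd @ M.T - xyz||^2 over the
    calibration patches. ccd_outputs: (n, 3) sensor values under the three
    filters; xyz_refs: (n, 3) known XYZ values of the reference colors."""
    M_t, *_ = np.linalg.lstsq(ccd_outputs, xyz_refs, rcond=None)
    return M_t.T

def to_xyz(M, ccd):
    """Convert one CCD triplet to XYZ with the fitted matrix."""
    return M @ ccd
```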

  15. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the Japanese market. In these circumstances, the question arises whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, this paper presents a comparative evaluation of mobile phone cameras and consumer grade digital cameras with respect to lens distortion, reliability, stability, and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to expand the market in digital photogrammetric fields.

  16. Development and Application of a new DACOM Airborne Trace Gas Instrument based on Room-Temperature Laser and Detector Technology and all-Digital Control and Data Processing

    NASA Astrophysics Data System (ADS)

    Diskin, G. S.; Sachse, G. W.; DiGangi, J. P.; Pusede, S. E.; Slate, T. A.; Rana, M.

    2014-12-01

    The DACOM (Differential Absorption Carbon monOxide Measurements) instrument has been used for airborne measurements of carbon monoxide, methane, and nitrous oxide for nearly four decades. Over the years, the instrument has undergone a nearly continuous series of modifications, taking advantage of improvements in available technology and the benefits of experience, but always utilizing cryogenically cooled lasers and detectors. More recently, though, the availability of room-temperature, higher-power single-mode lasers at the mid-infrared wavelengths used by DACOM has made it possible to replace both the cryogenic lasers and detectors with thermoelectrically cooled versions. And the relative stability of these lasers has allowed us to incorporate an all-digital wavelength stabilization technique developed previously for the Diode Laser Hygrometer (DLH) instrument. The new DACOM flew first in the summer 2013 SEAC4RS campaign, measuring CO from the DC-8 aircraft, and more recently measuring all three gases from the NASA P-3B aircraft in support of the summer 2014 DISCOVER-AQ campaign. We will present relevant aspects of the new instrument design and operation as well as selected data from recent campaigns illustrating instrument performance and some preliminary science.

  17. Validating NASA's Airborne Multikilohertz Microlaser Altimeter (Microaltimeter) by Direct Comparison of Data Taken Over Ocean City, Maryland Against an Existing Digital Elevation Model

    NASA Technical Reports Server (NTRS)

    Abel, Peter

    2003-01-01

NASA's Airborne Multikilohertz Microlaser Altimeter (Microaltimeter) is a scanning, photon-counting laser altimeter, which uses a low energy (less than 10 microjoules), high repetition rate (approximately 10 kHz) laser transmitting at 532 nm. A 14 cm diameter telescope images the ground return onto a segmented anode photomultiplier, which provides up to 16 range returns for each fire. Multiple engineering flights were made during 2001 and 2002 over the Maryland and Virginia coastal area, all during daylight hours. Post-processing of the data to geolocate the laser footprint and determine the terrain height requires post-detection Poisson filtering techniques to extract the actual ground returns from the noise. Validation of the instrument's ability to produce accurate terrain heights will be accomplished by direct comparison of data taken over Ocean City, Maryland with a Digital Elevation Model (DEM) of the region produced at Ohio State University (OSU) from other laser altimeter and photographic sources. The techniques employed to produce terrain heights from the Microaltimeter ranges will be shown, along with some preliminary comparisons with the OSU DEM.

  18. Quantitative Single-Particle Digital Autoradiography with α-Particle Emitters for Targeted Radionuclide Therapy using the iQID Camera

    SciTech Connect

    Miller, Brian W.; Frost, Sophia; Frayo, Shani; Kenoyer, Aimee L.; Santos, E. B.; Jones, Jon C.; Green, Damian J.; Hamlin, Donald K.; Wilbur, D. Scott; Fisher, Darrell R.; Orozco, Johnnie J.; Press, Oliver W.; Pagel, John M.; Sandmaier, B. M.

    2015-07-01

Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with alpha emitters may inactivate targeted cells with minimal radiation damage to surrounding tissues. For accurate dosimetry in alpha-RIT, tools are needed to visualize and quantify the radioactivity distribution and absorbed dose to targeted and non-targeted cells, especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, iQID (ionizing-radiation Quantum Imaging Detector), for use in alpha-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection technology that images and identifies charged-particle and gamma-ray/X-ray emissions spatially and temporally on an event-by-event basis. It employs recent advances in CCD/CMOS cameras and computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, we evaluated this system's characteristics for alpha particle imaging, including measurements of spatial resolution and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ~20 μm full width at half maximum (FWHM), and the alpha particle background was measured at a rate of (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was performed using a large-area iQID configuration (ø 11.5 cm

  19. Use of a digital camera onboard an unmanned aerial vehicle to monitor spring phenology at individual tree level

    NASA Astrophysics Data System (ADS)

    Berra, Elias; Gaulton, Rachel; Barr, Stuart

    2016-04-01

The monitoring of forest phenology, in a cost-effective manner, at a fine spatial scale and over relatively large areas remains a significant challenge. To address this issue, unmanned aerial vehicles (UAVs) appear as a potential new option for forest phenology monitoring. The aim of this study is to assess the potential of imagery acquired from a UAV to track seasonal changes in leaf canopy at individual tree level. UAV flights, deploying consumer-grade standard and near-infrared modified cameras, were carried out over a deciduous woodland during the spring season of 2015, from which a temporal series of calibrated and georeferenced 5 cm spatial resolution orthophotos was generated. Initial results from a subset of trees are presented in this paper. Four trees with different observed Start of Season (SOS) dates were selected to monitor UAV-derived Green Chromatic Coordinate (GCC), as a measure of canopy greenness. Mean GCC values were extracted from within the four individual tree crowns and were plotted against the day of year (DOY) when the data were acquired. The temporal GCC trajectory of each tree was associated with the visual observations of leaf canopy phenology (SOS) and also with the development of understory vegetation. The chronological order in which sudden increases of GCC values occurred matched the chronological order of observed SOS: the first sudden increase in GCC was detected in the tree which first reached SOS; 18.5 days later (on average), the last sudden increase of GCC was detected in the tree which last reached SOS (18 days later than the first one). Trees with later observed SOS presented GCC values that increased slowly over time, which was associated with the development of understory vegetation. Ongoing work deals with: 1) testing different indices; 2) radiometric calibration (retrieval of spectral reflectance); 3) expanding the analysis to more tree individuals, more tree species and over larger forest areas, and; 4) deriving
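The Green Chromatic Coordinate used above is conventionally computed per pixel as GCC = G / (R + G + B) and then averaged over the pixels of a tree crown. A minimal sketch, assuming crown pixels have already been extracted as RGB digital numbers (the function name and sample data are hypothetical):

```python
import numpy as np

def mean_gcc(crown_pixels):
    """Mean Green Chromatic Coordinate, GCC = G / (R + G + B), over one crown.

    crown_pixels: array-like of shape (N, 3) holding R, G, B digital numbers.
    """
    rgb = np.asarray(crown_pixels, dtype=float)
    gcc = rgb[:, 1] / rgb.sum(axis=1)
    return gcc.mean()

# A pure-green pixel gives GCC = 1.0; a grey pixel gives 1/3.
print(mean_gcc([[0, 255, 0], [85, 85, 85]]))  # ≈ 0.667
```

Tracking this mean against day of year, as the study does, turns canopy green-up into a simple time series in which the spring increase marks the start of season.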

  20. High-resolution digital elevation model of lower Cowlitz and Toutle Rivers, adjacent to Mount St. Helens, Washington, based on an airborne lidar survey of October 2007

    USGS Publications Warehouse

    Mosbrucker, Adam

    2015-01-01

    The lateral blast, debris avalanche, and lahars of the May 18th, 1980, eruption of Mount St. Helens, Washington, dramatically altered the surrounding landscape. Lava domes were extruded during the subsequent eruptive periods of 1980–1986 and 2004–2008. More than three decades after the emplacement of the 1980 debris avalanche, high sediment production persists in the Toutle River basin, which drains the northern and western flanks of the volcano. Because this sediment increases the risk of flooding to downstream communities on the Toutle and lower Cowlitz Rivers, the U.S. Army Corps of Engineers (USACE), under the direction of Congress to maintain an authorized level of flood protection, continues to monitor and mitigate excess sediment in North and South Fork Toutle River basins to help reduce this risk and to prevent sediment from clogging the shipping channel of the Columbia River. From October 22–27, 2007, Watershed Sciences, Inc., under contract to USACE, collected high-precision airborne lidar (light detection and ranging) data that cover 273 square kilometers (105 square miles) of lower Cowlitz and Toutle River tributaries from the Columbia River at Kelso, Washington, to upper North Fork Toutle River (below the volcano's edifice), including lower South Fork Toutle River. These data provide a digital dataset of the ground surface, including beneath forest cover. Such remotely sensed data can be used to develop sediment budgets and models of sediment erosion, transport, and deposition. The U.S. Geological Survey (USGS) used these lidar data to develop digital elevation models (DEMs) of the study area. DEMs are fundamental to monitoring natural hazards and studying volcanic landforms, fluvial and glacial geomorphology, and surface geology. Watershed Sciences, Inc., provided files in the LASer (LAS) format containing laser returns that had been filtered, classified, and georeferenced. The USGS produced a hydro-flattened DEM from ground-classified points at

  1. High-resolution digital elevation model of Mount St. Helens crater and upper North Fork Toutle River basin, Washington, based on an airborne lidar survey of September 2009

    USGS Publications Warehouse

    Mosbrucker, Adam

    2014-01-01

    The lateral blast, debris avalanche, and lahars of the May 18th, 1980, eruption of Mount St. Helens, Washington, dramatically altered the surrounding landscape. Lava domes were extruded during the subsequent eruptive periods of 1980–1986 and 2004–2008. More than three decades after the emplacement of the 1980 debris avalanche, high sediment production persists in the North Fork Toutle River basin, which drains the northern flank of the volcano. Because this sediment increases the risk of flooding to downstream communities on the Toutle and Cowlitz Rivers, the U.S. Army Corps of Engineers (USACE), under the direction of Congress to maintain an authorized level of flood protection, built a sediment retention structure on the North Fork Toutle River in 1989 to help reduce this risk and to prevent sediment from clogging the shipping channel of the Columbia River. From September 16–20, 2009, Watershed Sciences, Inc., under contract to USACE, collected high-precision airborne lidar (light detection and ranging) data that cover 214 square kilometers (83 square miles) of Mount St. Helens and the upper North Fork Toutle River basin from the sediment retention structure to the volcano's crater. These data provide a digital dataset of the ground surface, including beneath forest cover. Such remotely sensed data can be used to develop sediment budgets and models of sediment erosion, transport, and deposition. The U.S. Geological Survey (USGS) used these lidar data to develop digital elevation models (DEMs) of the study area. DEMs are fundamental to monitoring natural hazards and studying volcanic landforms, fluvial and glacial geomorphology, and surface geology. Watershed Sciences, Inc., provided files in the LASer (LAS) format containing laser returns that had been filtered, classified, and georeferenced. The USGS produced a hydro-flattened DEM from ground-classified points at Castle, Coldwater, and Spirit Lakes. Final results averaged about five laser last

  2. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.

  3. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  4. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  5. A Low Noise, Microprocessor-Controlled, Internally Digitizing Rotating-Vane Electric Field Mill for Airborne Platforms

    NASA Technical Reports Server (NTRS)

Bateman, M. G.; Stewart, M. F.; Blakeslee, R. J.; Podgorny, S. J.; Christian, H. J.; Mach, D. M.; Bailey, J. C.; Daskar, D.

    2006-01-01

This paper reports on a new generation of aircraft-based rotating-vane style electric field mills designed and built at NASA's Marshall Space Flight Center. The mills have individual microprocessors that digitize the electric field signal at the mill and respond to commands from the data system computer. The mills are very sensitive (1 V/m per bit), have a wide dynamic range (115 dB), and are very low noise (+/-1 LSB). Mounted on an aircraft, these mills can measure fields from +/-1 V/m to +/-500 kV/m. Once-per-second commanding from the data collection computer to each mill allows for precise timing and synchronization. The mills can also be commanded to execute a self-calibration in flight, which is done periodically to monitor the status and health of each mill.
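The stated sensitivity (1 V/m per bit), range (±500 kV/m), and dynamic range (115 dB) can be cross-checked with a back-of-envelope calculation; this is only a consistency check on the abstract's figures, not a description of the mill's actual ADC design:

```python
import math

sensitivity = 1.0   # V/m per least-significant bit (stated resolution)
full_scale = 500e3  # V/m, i.e. the ±500 kV/m measurement range

# Dynamic range in dB between full scale and one-LSB resolution
dynamic_range_db = 20 * math.log10(full_scale / sensitivity)

# Bits needed to span the full bipolar range at 1 V/m resolution
bits_needed = math.ceil(math.log2(2 * full_scale / sensitivity))

print(round(dynamic_range_db, 1))  # 114.0, close to the stated 115 dB
print(bits_needed)                 # 20
```

The ~114 dB figure agrees with the quoted 115 dB to within rounding, and a roughly 20-bit digitization path is consistent with the combination of range and per-bit sensitivity.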

  6. Multi-illumination Gabor holography recorded in a single camera snap-shot for high-resolution phase retrieval in digital in-line holographic microscopy

    NASA Astrophysics Data System (ADS)

    Sanz, Martin; Picazo-Bueno, Jose A.; Garcia, Javier; Micó, Vicente

    2015-05-01

In this contribution we introduce MISHELF microscopy, a new concept and design of lensless holographic microscope based on wavelength multiplexing, single hologram acquisition and digital image processing. The technique, whose name comes from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel microscopy, is based on the simultaneous illumination and recording of three diffraction patterns in the Fresnel domain. In combination with a novel and fast iterative phase retrieval algorithm, MISHELF microscopy is capable of high-resolution (micron range), phase-retrieved (twin image elimination) biological imaging of dynamic events (video rate recording speed), since it avoids the time multiplexing needed for in-line hologram sequence recording when using conventional phase-shifting or phase retrieval algorithms. MISHELF microscopy is validated using two different experimental layouts: one using RGB illumination and detection schemes, and another using IRRB illumination while keeping the RGB color camera as the detection device. Preliminary experimental results are provided for both experimental layouts using a synthetic object (USAF resolution test target).

  7. Assessment of the Spatial Co-registration of Multitemporal Imagery from Large Format Digital Cameras in the Context of Detailed Change Detection

    PubMed Central

    Coulter, Lloyd L.; Stow, Douglas A.

    2008-01-01

Large format digital camera (LFDC) systems are becoming more broadly available and regularly collect image data over large areas. Spectral and radiometric attributes of imagery from LFDC systems make this type of image data appropriate for semi-automated change detection. However, achieving accurate spatial co-registration between multitemporal image sets is necessary for semi-automated change detection. This study investigates the accuracy of co-registration between multitemporal image sets acquired using the Leica Geosystems ADS40, Intergraph Z/I Imaging® DMC, and Vexcel UltraCam-D sensors in areas of gentle, moderate, and extreme terrain relief. Custom image sets were collected and orthorectified by imagery vendors, with guidance from the authors. Results indicate that imagery acquired by vendors operating LFDC systems may be co-registered with pixel or sub-pixel level accuracy, even for environments with high terrain relief. Specific image acquisition and processing procedures facilitating this level of co-registration are discussed. PMID:27879815

  8. Airborne system for testing multispectral reconnaissance technologies

    NASA Astrophysics Data System (ADS)

    Schmitt, Dirk-Roger; Doergeloh, Heinrich; Keil, Heiko; Wetjen, Wilfried

    1999-07-01

There is an increasing demand for future airborne reconnaissance systems that obtain aerial images for tactical or peacekeeping operations. Unmanned Aerial Vehicles (UAVs) equipped with multispectral sensor systems and real-time, jam-resistant data transmission capabilities are of particularly high interest. An airborne experimental platform has been developed as a testbed to investigate different reconnaissance system concepts before their application in UAVs. It is based on a Dornier DO 228 aircraft, which is used as the flying platform. Great care has been taken to enable testing of different kinds of multispectral sensors: the platform can be equipped with an IR sensor head, high resolution aerial cameras covering the whole optical spectrum, and radar systems. The onboard equipment further includes systems for digital image processing, compression, coding, and storage. The data are RF-transmitted to the ground station using highly jam-resistant technologies. The images, after merging with enhanced vision components, are delivered to the observer, who has an uplink data channel available to control flight and imaging parameters.

  9. Blind camera fingerprinting and image clustering.

    PubMed

    Bloy, Greg J

    2008-03-01

    Previous studies have shown how to "fingerprint" a digital camera given a set of images known to come from the camera. A clustering technique is proposed to construct such fingerprints from a mixed set of images, enabling identification of each image's source camera without any prior knowledge of source.

  10. Time-resolved imaging of prompt-gamma rays for proton range verification using a knife-edge slit camera based on digital photon counters.

    PubMed

    Cambraia Lopes, Patricia; Clementel, Enrico; Crespo, Paulo; Henrotin, Sebastien; Huizenga, Jan; Janssens, Guillaume; Parodi, Katia; Prieels, Damien; Roellinghoff, Frauke; Smeets, Julien; Stichelbaut, Frederic; Schaart, Dennis R

    2015-08-07

Proton range monitoring may facilitate online adaptive proton therapy and improve treatment outcomes. Imaging of proton-induced prompt gamma (PG) rays using a knife-edge slit collimator is currently under investigation as a potential tool for real-time proton range monitoring. A major challenge in collimated PG imaging is the suppression of neutron-induced background counts. In this work, we present an initial performance test of two knife-edge slit camera prototypes based on arrays of digital photon counters (DPCs). PG profiles emitted from a PMMA target upon irradiation with a 160 MeV proton pencil beam (about 6.5 × 10⁹ protons delivered in total) were measured using detector modules equipped with four DPC arrays coupled to BGO or LYSO : Ce crystal matrices. The knife-edge slit collimator and detector module were placed at 15 cm and 30 cm from the beam axis, respectively, in all cases. The use of LYSO : Ce enabled time-of-flight (TOF) rejection of background events, by synchronizing the DPC readout electronics with the 106 MHz radiofrequency signal of the cyclotron. The signal-to-background (S/B) ratio of 1.6 obtained with a 1.5 ns TOF window and a 3 MeV-7 MeV energy window was about 3 times higher than that obtained with the same detector module without TOF discrimination and 2 times higher than the S/B ratio obtained with the BGO module. Even 1 mm shifts of the Bragg peak position translated into clear and consistent shifts of the PG profile if TOF discrimination was applied, for a total number of protons as low as about 6.5 × 10⁸ and a detector surface of 6.6 cm × 6.6 cm.

  11. Multispectral imaging of absorption and scattering properties of in vivo exposed rat brain using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Yoshida, Keiichiro; Nishidate, Izumi; Ishizuka, Tomohiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2015-05-01

In order to estimate multispectral images of the absorption and scattering properties in the cerebral cortex of in vivo rat brain, we investigated spectral reflectance images estimated by the Wiener estimation method using a digital RGB camera. A Monte Carlo simulation-based multiple regression analysis for the corresponding spectral absorbance images at nine wavelengths (500, 520, 540, 560, 570, 580, 600, 730, and 760 nm) was then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin were estimated as the absorption parameters, whereas the coefficient a and the exponent b of the reduced scattering coefficient spectrum approximated by a power law function were estimated as the scattering parameters. The spectra of absorption and reduced scattering coefficients were reconstructed from the absorption and scattering parameters, and the spectral images of absorption and reduced scattering coefficients were then estimated. In order to confirm the feasibility of this method, we performed in vivo experiments on exposed rat brain. The estimated images of the absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of the reduced scattering coefficients had a broad scattering spectrum, exhibiting a larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. The changes in the estimated absorption and scattering parameters during normoxia, hyperoxia, and anoxia indicate the potential applicability of the method to evaluate the pathophysiological conditions of in vivo brain due to the loss of tissue viability.
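The Wiener estimation step mentioned above recovers spectral reflectance from RGB camera responses via correlation matrices learned from a training set. A minimal sketch with synthetic spectra and assumed camera sensitivities (the study's actual training data, sensitivities, and wavelengths beyond their count of nine differ):

```python
import numpy as np

# Synthetic training set: reflectance spectra at 9 wavelengths (as in the
# abstract) paired with RGB responses through assumed camera sensitivities.
rng = np.random.default_rng(1)
K, N = 9, 200
spectra = rng.uniform(0.0, 1.0, size=(N, K))  # training reflectance spectra
S = rng.uniform(0.0, 1.0, size=(3, K))        # assumed RGB spectral sensitivities
rgb = spectra @ S.T                           # simulated camera responses

# Wiener estimation matrix W = R_rv @ inv(R_vv), from sample correlation matrices
R_rv = spectra.T @ rgb / N   # (9, 3) cross-correlation, spectra vs. responses
R_vv = rgb.T @ rgb / N       # (3, 3) autocorrelation of responses
W = R_rv @ np.linalg.inv(R_vv)

est_spectra = rgb @ W.T      # spectra estimated from RGB alone, shape (200, 9)
```

Because only three channels are observed, the nine-band estimate is a statistical reconstruction, which is why the study pairs it with Monte Carlo-based regression to extract the physical absorption and scattering parameters.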

  12. Multispectral imaging of absorption and scattering properties of in vivo exposed rat brain using a digital red-green-blue camera.

    PubMed

    Yoshida, Keiichiro; Nishidate, Izumi; Ishizuka, Tomohiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2015-05-01

In order to estimate multispectral images of the absorption and scattering properties in the cerebral cortex of in vivo rat brain, we investigated spectral reflectance images estimated by the Wiener estimation method using a digital RGB camera. A Monte Carlo simulation-based multiple regression analysis for the corresponding spectral absorbance images at nine wavelengths (500, 520, 540, 560, 570, 580, 600, 730, and 760 nm) was then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin were estimated as the absorption parameters, whereas the coefficient a and the exponent b of the reduced scattering coefficient spectrum approximated by a power law function were estimated as the scattering parameters. The spectra of absorption and reduced scattering coefficients were reconstructed from the absorption and scattering parameters, and the spectral images of absorption and reduced scattering coefficients were then estimated. In order to confirm the feasibility of this method, we performed in vivo experiments on exposed rat brain. The estimated images of the absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of the reduced scattering coefficients had a broad scattering spectrum, exhibiting a larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. The changes in the estimated absorption and scattering parameters during normoxia, hyperoxia, and anoxia indicate the potential applicability of the method to evaluate the pathophysiological conditions of in vivo brain due to the loss of tissue viability.

  13. New portable system for dental plaque measurement using a digital single-lens reflex camera and image analysis: Study of reliability and validation

    PubMed Central

    Rosa, Guillermo Martin; Elizondo, Maria Lidia

    2015-01-01

Background: The quantification of dental plaque (DP) by indices has limitations: the indices depend on the operator's subjective evaluation and are measured on an ordinal scale. The purpose of this study was to develop and evaluate a method to measure DP on a proportional scale. Materials and Methods: A portable photographic positioning device (PPPD) was designed and added to a digital single-lens reflex camera. Seventeen subjects participated in this study; after DP disclosure with erythrosine, their incisors and a calibration scale were photographed by two operators in duplicate, re-positioning the PPPD between acquisitions. A third operator registered the Quigley-Hein DP index as modified by Turesky (Q-H/TPI). After tooth brushing, the same operators repeated the photographs and the Q-H/TPI. The image analysis system (IAS) technique allowed measurement, in mm2, of the total vestibular tooth area and the area with DP. Results: Reliability was determined with the intra-class correlation coefficient, which was 0.9936 (P < 0.05) for intra-operator repeatability and 0.9931 (P < 0.05) for inter-operator reproducibility. Validity was assessed using Spearman's correlation coefficient, which indicated a strong positive correlation with the Q-H/TPI, rs = 0.84 (P < 0.01). The sensitivity of the IAS was evaluated with two sample sizes; only the IAS was able to detect significant differences (P < 0.05) with the smaller sample (n = 8). Conclusions: The image analysis system proved to be a reliable and valid method to measure the quantity of DP on a proportional scale, allowing more powerful statistical analysis and thus facilitating trials with a smaller sample size. PMID:26229267
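The area measurement at the core of such an IAS reduces to counting segmented pixels and scaling by a calibration factor derived from the photographed scale. A sketch with entirely hypothetical masks, geometry, and scale factor:

```python
import numpy as np

# Hypothetical binary masks standing in for the segmentation output, plus a
# pixel-to-millimetre factor taken from the photographed calibration scale.
mm_per_pixel = 0.05  # assumed scale factor

tooth = np.zeros((100, 100), dtype=bool)
tooth[20:80, 20:80] = True       # whole vestibular tooth surface (60 x 60 px)
plaque = np.zeros((100, 100), dtype=bool)
plaque[20:50, 20:80] = True      # disclosed-plaque region (30 x 60 px)

tooth_mm2 = tooth.sum() * mm_per_pixel ** 2
plaque_mm2 = (plaque & tooth).sum() * mm_per_pixel ** 2

print(tooth_mm2, plaque_mm2, plaque_mm2 / tooth_mm2)  # 9.0 4.5 0.5
```

Reporting areas (or their ratio) in mm2 is what makes the measurement proportional rather than ordinal, enabling the stronger parametric statistics the abstract highlights.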

  14. Real time orthorectification of high resolution airborne pushbroom imagery

    NASA Astrophysics Data System (ADS)

    Reguera-Salgado, Javier; Martin-Herrero, Julio

    2011-11-01

Advanced architectures have been proposed for efficient orthorectification of digital airborne camera images, including a system based on GPU processing and distributed computing able to geocorrect three digital still aerial photographs per second. Here, we address the computationally harder problem of geocorrecting image data from airborne pushbroom sensors, where each image line has its own associated camera attitude and position parameters. Using OpenGL and CUDA interoperability and projective texture techniques, originally developed for fast shadow rendering, image data are projected onto a Digital Terrain Model (DTM) as if by a slide projector placed and rotated in accordance with GPS position and inertial navigation (IMU) data. Each line is sequentially projected onto the DTM to generate an intermediate frame, consisting of a unique projected line shaped by the DTM relief. The frames are then merged into a geometrically corrected, georeferenced orthoimage. To target hyperband systems while avoiding the high-dimensional overhead, we work with an orthoimage of pixel placeholders pointing to the raw image data, which are then combined as needed for visualization or processing tasks. We achieved faster than real-time performance in a hyperspectral pushbroom system working at a line rate of 30 Hz with 200 bands and a 1280 pixel wide swath over a 1 m grid DTM, reaching a minimum processing speed of 356 lines per second (up to 511 lps), over eleven (up to seventeen) times the acquisition rate. Our method also allows the correction of systematic GPS and/or IMU biases by means of interactive 3D navigation.

  15. Time-resolved imaging of prompt-gamma rays for proton range verification using a knife-edge slit camera based on digital photon counters

    NASA Astrophysics Data System (ADS)

    Cambraia Lopes, Patricia; Clementel, Enrico; Crespo, Paulo; Henrotin, Sebastien; Huizenga, Jan; Janssens, Guillaume; Parodi, Katia; Prieels, Damien; Roellinghoff, Frauke; Smeets, Julien; Stichelbaut, Frederic; Schaart, Dennis R.

    2015-08-01

    Proton range monitoring may facilitate online adaptive proton therapy and improve treatment outcomes. Imaging of proton-induced prompt gamma (PG) rays using a knife-edge slit collimator is currently under investigation as a potential tool for real-time proton range monitoring. A major challenge in collimated PG imaging is the suppression of neutron-induced background counts. In this work, we present an initial performance test of two knife-edge slit camera prototypes based on arrays of digital photon counters (DPCs). PG profiles emitted from a PMMA target upon irradiation with a 160 MeV proton pencil beam (about 6.5 × 10⁹ protons delivered in total) were measured using detector modules equipped with four DPC arrays coupled to BGO or LYSO:Ce crystal matrices. The knife-edge slit collimator and detector module were placed at 15 cm and 30 cm from the beam axis, respectively, in all cases. The use of LYSO:Ce enabled time-of-flight (TOF) rejection of background events, by synchronizing the DPC readout electronics with the 106 MHz radiofrequency signal of the cyclotron. The signal-to-background (S/B) ratio of 1.6 obtained with a 1.5 ns TOF window and a 3-7 MeV energy window was about 3 times higher than that obtained with the same detector module without TOF discrimination, and 2 times higher than the S/B ratio obtained with the BGO module. Even 1 mm shifts of the Bragg peak position translated into clear and consistent shifts of the PG profile when TOF discrimination was applied, for a total number of protons as low as about 6.5 × 10⁸ and a detector surface of 6.6 cm × 6.6 cm.
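
The TOF discrimination described above amounts to gating events on their arrival phase relative to the cyclotron RF period (~9.43 ns at 106 MHz) plus an energy cut. A schematic sketch (the window centre and array layout are illustrative; real readout synchronization is done in electronics):

```python
import numpy as np

RF_PERIOD_NS = 1e3 / 106.0   # cyclotron RF period: ~9.43 ns at 106 MHz

def select_prompt_gammas(t_ns, e_mev, tof_center, tof_width=1.5,
                         e_lo=3.0, e_hi=7.0):
    """Keep events inside the TOF window (phase relative to the RF signal)
    and the 3-7 MeV energy window; slower neutron-induced events fall
    outside the prompt-gamma TOF peak and are rejected."""
    phase = np.mod(t_ns, RF_PERIOD_NS)
    tof_ok = np.abs(phase - tof_center) < tof_width / 2
    e_ok = (e_lo < e_mev) & (e_mev < e_hi)
    return tof_ok & e_ok
```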

  16. Polarimetric sensor systems for airborne ISR

    NASA Astrophysics Data System (ADS)

    Chenault, David; Foster, Joseph; Pezzaniti, Joseph; Harchanko, John; Aycock, Todd; Clark, Alex

    2014-06-01

    Over the last decade, polarimetric imaging technologies have undergone significant advancements that have led to the development of small, low-power polarimetric cameras capable of meeting current airborne ISR mission requirements. In this paper, we describe the design and development of a compact, real-time, infrared imaging polarimeter, provide preliminary results demonstrating the enhanced contrast possible with such a system, and discuss ways in which this technology can be integrated with existing manned and unmanned airborne platforms.

  17. Airborne Imagery Collections Barrow 2013

    DOE Data Explorer

    Cherry, Jessica; Crowder, Kerri

    2015-07-20

    The data here are orthomosaics, digital surface models (DSMs), and individual frames captured during low altitude airborne flights in 2013 at the Barrow Environmental Observatory. The orthomosaics, thermal IR mosaics, and DSMs were generated from the individual frames using Structure from Motion techniques.

  18. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
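
The dynamic-range figures quoted in dB follow the usual optical convention of 20·log10 of the max/min signal ratio, so 82.06 dB corresponds to a max/min ratio of roughly 1.27 × 10⁴. A trivial sketch:

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Optical dynamic range in decibels: 20*log10 of the max/min
    detectable signal ratio."""
    return 20.0 * math.log10(max_signal / min_signal)
```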

  19. Range-Gated LADAR Coherent Imaging Using Parametric Up-Conversion of IR and NIR Light for Imaging with a Visible-Range Fast-Shuttered Intensified Digital CCD Camera

    SciTech Connect

    YATES,GEORGE J.; MCDONALD,THOMAS E. JR.; BLISS,DAVID E.; CAMERON,STEWART M.; ZUTAVERN,FRED J.

    2000-12-20

    Research is presented on infrared (IR) and near-infrared (NIR) sensitive sensor technologies for use in a high-speed shuttered/intensified digital video camera system for range-gated imaging at ''eye-safe'' wavelengths in the region of 1.5 microns. The study is based upon nonlinear crystals used for second harmonic generation (SHG) in optical parametric oscillators (OPOs) for conversion of NIR and IR laser light to visible-range light for detection with generic S-20 photocathodes. The intensifiers are ''stripline''-geometry 18-mm-diameter microchannel plate intensifiers (MCPIIs), designed by Los Alamos National Laboratory and manufactured by Philips Photonics. The MCPIIs are designed for fast optical shuttering with exposures in the 100-200 ps range, and are coupled to a fast readout CCD camera. Conversion efficiency and resolution for the wavelength conversion process are reported. Experimental set-ups for the wavelength shifting and the optical configurations for producing and transporting laser reflectance images are discussed.

  20. Cardiac cameras.

    PubMed

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner, which slowly transmitted cardiac count distributions in a row-by-row fashion onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and the development of powerful computers to analyze, display, and quantify data have been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise of higher counting efficiency and resolution, leading to better image quality, greater patient comfort, and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  1. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades, multimedia devices, and imaging devices in particular (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology is given, with some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  2. Characterization of the Series 1000 Camera System

    SciTech Connect

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact, network-addressable, scientific-grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electrons read noise at a 1 MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and its performance characterization is reported.

  3. The Geospectral Camera: a Compact and Geometrically Precise Hyperspectral and High Spatial Resolution Imager

    NASA Astrophysics Data System (ADS)

    Delauré, B.; Michiels, B.; Biesemans, J.; Livens, S.; Van Achteren, T.

    2013-04-01

    Small unmanned aerial vehicles are increasingly being employed for environmental monitoring at local scale, which drives the demand for compact and lightweight spectral imagers. This paper describes the geospectral camera, a novel compact imager concept. The camera is built around an innovative detector which has two sensor elements on a single chip and therefore offers the functionality of two cameras within the volume of a single one. The two sensor elements allow the camera to derive both spectral information and geometric information (high spatial resolution imagery and a digital surface model) of the scene of interest. A first geospectral camera prototype has been developed. It uses a linear variable optical filter installed in front of one of the two sensors of the MEDUSA CMOS imager chip. An accompanying software approach has been developed which exploits the simultaneous information of the two sensors in order to extract an accurate spectral image product. This method has been functionally demonstrated by applying it to image data acquired during an airborne acquisition.

  4. Fourth Airborne Geoscience Workshop

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The focus of the workshop was on how the airborne community can assist in achieving the goals of the Global Change Research Program. The many activities that employ airborne platforms and sensors were discussed: platforms and instrument development; airborne oceanography; lidar research; SAR measurements; Doppler radar; laser measurements; cloud physics; airborne experiments; airborne microwave measurements; and airborne data collection.

  5. Airborne Particles.

    ERIC Educational Resources Information Center

    Ojala, Carl F.; Ojala, Eric J.

    1987-01-01

    Describes an activity in which students collect airborne particles using a common vacuum cleaner. Suggests ways for the students to convert their data into information related to air pollution and human health. Urges consideration of weather patterns when analyzing the results of the investigation. (TW)

  6. Airborne Imagery

    NASA Technical Reports Server (NTRS)

    1983-01-01

    ATM (Airborne Thematic Mapper) was developed for NSTL (National Space Technology Laboratories) by the Daedalus Company. It offers expanded capabilities for timely, accurate and cost-effective identification of areas with prospecting potential. A related system is TIMS, the Thermal Infrared Multispectral Scanner. Originating from Landsat 4, it is also used for agricultural studies, etc.

  7. Mars Observer camera

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 km x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.
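
The pixel averaging mentioned above is plain block binning: averaging 8 x 8 blocks takes the 1.4 m/pixel narrow-angle scale to roughly 11 m/pixel. A generic sketch (not MOC flight software):

```python
import numpy as np

def bin_pixels(img, f):
    """Average f x f pixel blocks (pixel averaging / binning); at f = 8
    a 1.4 m/pixel image drops to roughly 11 m/pixel."""
    h, w = img.shape
    hc, wc = h - h % f, w - w % f                  # crop to a multiple of f
    return img[:hc, :wc].reshape(hc // f, f, wc // f, f).mean(axis=(1, 3))
```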

  8. Using Digital Imaging in Classroom and Outdoor Activities.

    ERIC Educational Resources Information Center

    Thomasson, Joseph R.

    2002-01-01

    Explains how to use digital cameras and related basic equipment during indoor and outdoor activities. Uses digital imaging in general botany class to identify unknown fungus samples. Explains how to select a digital camera and other necessary equipment. (YDS)

  9. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation eminating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  10. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  11. Camera Calibration Accuracy at Different Uav Flying Heights

    NASA Astrophysics Data System (ADS)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation surveys, whereby low-cost digital cameras are commonly used in UAV mapping. Thus, camera calibration is considered important for obtaining high-accuracy UAV mapping with low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and in the field, and the UAV image mapping accuracy assessment used calibration parameters from different camera distances. The camera distances used for the image calibration acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres in the field, using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. The bundle adjustment concept was applied in the Australis software to perform the camera calibration and accuracy assessment. The results showed that a camera distance of 25 metres is the optimum object distance, as this gave the best accuracy in the laboratory as well as in outdoor mapping. In conclusion, camera calibration at several camera distances should be applied to acquire better accuracy in mapping, and the best camera parameters for the UAV image mapping should be selected for highly accurate mapping measurement.
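
The accuracy assessment behind such a calibration ultimately rests on reprojection error, the quantity bundle adjustment minimizes. A minimal pinhole sketch without distortion terms (a real calibration, e.g. in Australis, estimates distortion too; names here are illustrative):

```python
import numpy as np

def reprojection_rmse(K, R, t, pts3d, pts2d):
    """RMS reprojection error of a pinhole camera model: project 3D
    points through rotation R, translation t and intrinsics K, then
    compare against the measured 2D image points."""
    cam = R @ pts3d.T + t[:, None]          # world -> camera frame, (3, N)
    proj = (K @ (cam / cam[2])).T[:, :2]    # perspective divide + intrinsics
    return np.sqrt(np.mean(np.sum((proj - pts2d) ** 2, axis=1)))
```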

  12. Estimation of the Atmospheric Refraction Effect in Airborne Images Using Radiosonde Data

    NASA Astrophysics Data System (ADS)

    Beisl, U.; Tempelmann, U.

    2016-06-01

    The influence of atmospheric refraction on the geometric accuracy of airborne photogrammetric images was already considered in the days of analogue photography. The effect is a function of the varying refractive index along the path from the ground to the image sensor, and therefore depends on the height over ground, the view zenith angle and the atmospheric constituents. It leads to a gradual increase of the scale towards the borders of the image, i.e., a magnification takes place. Textbooks list a shift of several pixels at the borders of standard wide-angle images. Since, at that time, images could only be acquired in good weather conditions, the effect was calculated using standard atmospheres for good atmospheric conditions, leading to simple empirical formulas. Often the pixel shift caused by refraction was approximated as linear with height and compensated by an adjustment of the focal length. With the advent of sensitive digital cameras, the image dynamics allow for capturing images in adverse weather conditions, so the influence of the atmospheric profiles on the geometric accuracy of the images has to be investigated and the validity of the standard correction formulas has to be checked. This paper compares the results from the standard formulas by Saastamoinen with the results calculated from a broad selection of atmospheres obtained from radiosonde profile data. The geometric deviation is calculated by numerical integration of the refractive index as a function of height, using the refractive index formula by Ciddor. It turns out that the effect of different atmospheric profiles (including inversion situations) is generally small compared to the overall effect, except at low camera heights; there, however, the absolute deviation is itself small.
Since the necessary atmospheric profile data are often not readily available for airborne images a formula proposed by Saastamoinen is verified that uses only camera height, the pressure
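
The kind of numerical integration described can be sketched, under a simplified flat-earth, horizontally stratified atmosphere, by tracing the ray with Snell's invariant and comparing against the straight line (the exponential index profile in the test is illustrative, not Ciddor's formula):

```python
import numpy as np

def refraction_shift(n_profile, heights, z0):
    """Horizontal displacement of a refracted ray versus the straight
    line, traced down a horizontally stratified atmosphere using
    Snell's invariant n * sin(z) = const.

    heights: metres, descending from the camera to the ground;
    n_profile: refractive index at each height; z0: view zenith angle.
    """
    invariant = n_profile[0] * np.sin(z0)
    x_ray = x_straight = 0.0
    for i in range(len(heights) - 1):
        dh = heights[i] - heights[i + 1]
        z = np.arcsin(invariant / n_profile[i])  # local zenith angle (Snell)
        x_ray += dh * np.tan(z)
        x_straight += dh * np.tan(z0)
    return x_ray - x_straight
```

With a refractive index increasing toward the ground, the refracted ray lands short of the straight-line intersection, i.e. the shift is negative, consistent with the scale magnification described above.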

  13. Absolute airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Baumann, Henri

    This work consists of a feasibility study of a first-stage prototype airborne absolute gravimeter system. In contrast to relative systems, which use spring gravimeters, the measurements acquired by absolute systems are uncorrelated and the instrument does not suffer from problems such as instrumental drift, the frequency response of the spring and possible variation of the calibration factor. The major problem we had to resolve was to reduce the influence of the non-gravitational accelerations included in the measurements. We studied two different approaches to resolve it: direct mechanical filtering, and post-processing digital compensation. The first part of the work describes in detail the different passive mechanical vibration filters, which were studied and tested in the laboratory and later in a small truck in movement. For these tests, as well as for the airborne measurements, an absolute gravimeter FG5-L from Micro-G Ltd was used together with a Litton-200 inertial navigation system, a vertical accelerometer EpiSensor, and GPS receivers for positioning. These tests showed that only the use of an optical table gives acceptable results; however, it is unable to compensate for the effects of the accelerations of the drag-free chamber. The second part describes the strategy of the data processing, which is based on modeling the perturbing accelerations by means of GPS, EpiSensor and INS data. In the third part the airborne experiment is described in detail, from the mounting in the aircraft and data processing to the different problems encountered during the evaluation of the quality and accuracy of the results. In the data processing part, the different steps conducted from the raw apparent gravity data and the trajectories to the estimation of the true gravity are explained. 
    A comparison between the estimated airborne data and those obtained by ground upward continuation at flight altitude allows us to state that airborne absolute gravimetry is feasible and

  14. SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements

    PubMed Central

    Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos

    2009-01-01

    In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating INS/GPS and a digital camera, designed to provide timely positioned and projected thermal images and video data streams that can be rapidly integrated into the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON, and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters as well as UAV platforms owned and operated by the Greek Air Force. PMID:22399963

  15. NEON Airborne Remote Sensing of Terrestrial Ecosystems

    NASA Astrophysics Data System (ADS)

    Kampe, T. U.; Leisso, N.; Krause, K.; Karpowicz, B. M.

    2012-12-01

    The National Ecological Observatory Network (NEON) is the continental-scale research platform that will collect information on ecosystems across the United States to advance our understanding and ability to forecast environmental change at the continental scale. One of NEON's observing systems, the Airborne Observation Platform (AOP), will fly an instrument suite consisting of a high-fidelity visible-to-shortwave infrared imaging spectrometer, a full-waveform small-footprint LiDAR, and a high-resolution digital camera on a low-altitude aircraft platform. NEON AOP is focused on acquiring data on several terrestrial Essential Climate Variables, including bioclimate, biodiversity, biogeochemistry, and land use products. These variables are collected throughout a network of 60 sites across the Continental United States, Alaska, Hawaii and Puerto Rico via ground-based and airborne measurements. Airborne remote sensing plays a critical role by providing measurements at the scale of individual shrubs and larger plants over hundreds of square kilometers. The NEON AOP bridges the spatial scales from individual organisms and stands to the scale of satellite-based remote sensing. NEON is building 3 airborne systems to facilitate the routine coverage of NEON sites and provide the capacity to respond to investigator requests for specific projects. The first NEON imaging spectrometer, a next-generation VSWIR instrument, was recently delivered to NEON by JPL. This instrument has been integrated with a small-footprint waveform LiDAR on the first NEON airborne platform (AOP-1). A series of AOP-1 test flights were conducted during the first year of NEON's construction phase. The goal of these flights was to test instrument functionality and performance, exercise remote sensing collection protocols, and provide provisional data for algorithm and data product validation. These test flights focused on the following questions: What is the optimal remote

  16. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  17. Three-dimensional environment models from airborne laser radar data

    NASA Astrophysics Data System (ADS)

    Soderman, Ulf; Ahlberg, Simon; Elmqvist, Magnus; Persson, Asa

    2004-09-01

    Detailed 3D environment models for visualization and computer-based analyses are important in many defence and homeland security applications, e.g. crisis management, mission planning and rehearsal, damage assessment, etc. The high resolution data from airborne laser radar systems for 3D sensing provide an excellent source of data for obtaining the information needed for many of these models. To utilise the 3D data provided by the laser radar systems, however, efficient methods for data processing and environment model construction need to be developed. In this paper we will present some results on the development of laser data processing methods, including methods for data classification, bare earth extraction, 3D reconstruction of buildings, and identification of single trees and estimation of their position, height, canopy size and species. We will also show how the results can be used for the construction of detailed 3D environment models for military modelling and simulation applications. The methods use data from discrete-return airborne laser radar systems and digital cameras.

  18. Extracting Roof Parameters and Heat Bridges Over the City of Oldenburg from Hyperspectral, Thermal, and Airborne Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Bannehr, L.; Luhmann, Th.; Piechel, J.; Roelfs, T.; Schmidt, An.

    2011-09-01

    Remote sensing methods are used to obtain different kinds of information about the state of the environment. Within the cooperative research project HiReSens, funded by the German BMBF, a hyperspectral scanner, an airborne laser scanner, a thermal camera, and an RGB camera are employed on a small aircraft to determine roof material parameters and heat bridges of rooftops over the city of Oldenburg, Lower Saxony. HiReSens aims to combine various geometrically highly resolved data in order to achieve relevant evidence about the state of the city buildings. Thermal data are used to obtain the energy distribution of single buildings. The use of hyperspectral data yields information about the material composition of roofs. From airborne laser scanning (ALS) data, digital surface models are inferred; they form the basis for locating the best orientations for solar panels on the city buildings. The combination of the different data sets offers the opportunity to exploit synergies between differently working systems. Central goals are the development of tools for the detection of heat bridges by means of thermal data, spectral characterization of roof parameters on the basis of hyperspectral data, and 3D capture of buildings from airborne laser scanner data. Collecting, analyzing and merging the data are not trivial, especially when resolution and accuracy in the range of a few decimetres are targeted. The results achieved need to be regarded as preliminary; further investigations are still required to prove the accuracy in detail.

  19. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method on the raw data of Lytro. The experiments show that our method offers a higher degree of automation than previously published methods.
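
One sub-step such a calibration needs, refining each coarse microlens centre found on the white image, can be sketched as an intensity-weighted centroid inside a window around the coarse centre (window size and names are illustrative, not from the paper):

```python
import numpy as np

def refine_centers(white_img, approx_centers, win=5):
    """Refine microlens centres found on a white (flat-field) image by an
    intensity-weighted centroid inside a window around each coarse centre."""
    refined = []
    yy, xx = np.mgrid[-win:win + 1, -win:win + 1]   # offsets within the window
    for cy, cx in approx_centers:
        patch = white_img[cy - win:cy + win + 1,
                          cx - win:cx + win + 1].astype(float)
        m = patch.sum()
        refined.append((cy + (yy * patch).sum() / m,
                        cx + (xx * patch).sum() / m))
    return refined
```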

  20. Making Connections with Digital Data

    ERIC Educational Resources Information Center

    Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert

    2004-01-01

    State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…

  1. Airborne Crowd Density Estimation

    NASA Astrophysics Data System (ADS)

    Meynberg, O.; Kuschk, G.

    2013-10-01

    This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done with in-situ camera systems mounted at elevated positions, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging realistic scenarios.
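
The feature stage described above, Gabor filter-bank responses summarized per patch, can be sketched as follows; the resulting vector would then be fed to an SVM classifier. The kernel size, scales, and the mean/std summary are my assumptions, not the paper's exact parameters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (cosine) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(patch, scales=(4, 8), orientations=4):
    """Mean/std of filter-bank responses over scales and orientations;
    this texture feature vector spans the space an SVM would classify."""
    feats = []
    for wl in scales:
        for k in range(orientations):
            kern = gabor_kernel(15, wl, k * np.pi / orientations, wl / 2)
            # circular convolution via FFT, enough for patch statistics
            resp = np.abs(np.fft.ifft2(np.fft.fft2(patch) *
                                       np.fft.fft2(kern, patch.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)
```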

  2. A Synergistic Approach to Atmospheric Compensation of Neon's Airborne Hyperspectral Imagery Utilizing an Airborne Solar Spectral Irradiance Radiometer

    NASA Astrophysics Data System (ADS)

    Wright, L.; Karpowicz, B. M.; Kindel, B. C.; Schmidt, S.; Leisso, N.; Kampe, T. U.; Pilewskie, P.

    2014-12-01

A wide variety of critical information regarding bioclimate, biodiversity, and biogeochemistry is embedded in airborne hyperspectral imagery. Most, if not all, of the primary signal relies upon first deriving the surface reflectance of land cover and vegetation from measured hyperspectral radiance. This places stringent requirements on terrain and atmospheric compensation algorithms to accurately derive surface reflectance properties. An observatory designed to measure bioclimate, biodiversity, and biogeochemistry variables from surface reflectance must take great care in developing an approach that chooses the most accurate algorithms and provides those algorithms with the data necessary to describe the physical mechanisms that affect the measured at-sensor radiance. The Airborne Observation Platform (AOP), part of the National Ecological Observatory Network (NEON), is developing such an approach. NEON is a continental-scale ecological observation platform designed to collect and disseminate data to enable the understanding and forecasting of the impacts of climate change, land use change, and invasive species on ecology. The instrumentation package used by the AOP includes a visible and shortwave infrared hyperspectral imager, waveform LiDAR, and a high resolution (RGB) digital camera. In addition to airborne measurements, ground-based CIMEL sun photometers will be used to help characterize atmospheric aerosol loading, and ground validation measurements with field spectrometers will be made at select NEON sites. While the core instrumentation package provides critical information to derive surface reflectance of land surfaces and vegetation, the addition of a Solar Spectral Irradiance Radiometer (SSIR) is being investigated as an additional source of data to help identify and characterize atmospheric aerosol and cloud contributions to the radiance measured by the hyperspectral imager. The addition of the SSIR provides the opportunity to

  3. Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation

    PubMed Central

    Przemyslaw, Baranski; Pawel, Strumillo

    2012-01-01

    The paper presents an algorithm for estimating a pedestrian's location in an urban environment. The algorithm is based on the particle filter and fuses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate the relative displacement of the pedestrian: a gyroscope estimates changes in heading direction, while an accelerometer counts the pedestrian's steps and estimates their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences, etc. This limits position inaccuracy to ca. 10 m. Incorporating depth estimates derived from a stereo camera, compared against a 3D model of the environment, has enabled a further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian's location with an error smaller than 2 m, compared to an error of 6.5 m for navigation based solely on GPS. PMID:22969321
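The fusion scheme described above lends itself to a compact sketch. The following is an illustrative particle-filter update, not the authors' implementation; the `walkable(x, y)` map predicate, the odometry noise level, and the Gaussian GPS likelihood are all assumptions for the sake of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, step_len, heading,
                         gps_xy, gps_sigma, walkable):
    """One update of a minimal 2-D pedestrian particle filter.

    particles : (N, 2) positions; weights : (N,) normalized weights.
    step_len, heading : odometry from the accelerometer/gyroscope.
    gps_xy, gps_sigma : GPS fix and its assumed standard deviation.
    walkable(x, y) -> bool : map constraint (False inside buildings).
    """
    n = len(particles)
    # Motion model: propagate each particle by the measured step,
    # perturbed to reflect odometry noise (assumed 0.3 m std dev).
    step = step_len * np.array([np.cos(heading), np.sin(heading)])
    particles = particles + step + rng.normal(0.0, 0.3, size=(n, 2))
    # Measurement model: Gaussian GPS likelihood.
    d2 = np.sum((particles - gps_xy) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / gps_sigma ** 2)
    # Map constraint: zero out particles that crossed into buildings.
    ok = np.array([walkable(x, y) for x, y in particles])
    weights = weights * ok
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Usage: the location estimate is the weighted mean of the cloud.
parts = rng.normal([0.0, 0.0], 5.0, size=(500, 2))
w = np.full(500, 1.0 / 500)
parts, w = particle_filter_step(parts, w, 0.7, 0.0,
                                np.array([1.0, 0.0]), 3.0,
                                lambda x, y: y < 50.0)
estimate = np.average(parts, weights=w, axis=0)
```

In a full system this step would be repeated per detected step, with the stereo-depth comparison entering as an additional weight factor alongside the GPS term.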

  4. Solid state television camera has no imaging tube

    NASA Technical Reports Server (NTRS)

    Huggins, C. T.

    1972-01-01

    Camera with characteristics of vidicon camera and greater resolution than home TV receiver uses mosaic of phototransistors. Because of low power and small size, camera has many applications. Mosaics can be used as cathode ray tubes and analog-to-digital converters.

  5. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  6. The development of airborne video system for monitoring of river environments

    SciTech Connect

    Yoshikawa, Shigeya; Mizutani, Nobuyuki; Mizukami, Masumi; Koyano, Toshirou

    1996-11-01

    Recently, airborne videography has been widely used for monitoring environmental resources such as rivers, forests, and oceans. Although airborne videography has lower resolution than aerial photographs, it can effectively reduce the cost of continuous monitoring of wide areas. Furthermore, video images can easily be processed on a personal computer. This paper introduces an airborne video system for monitoring of Class A river environments. The system consists of two sub-systems. One is the data collection system, composed of a video camera, a Global Positioning System (GPS) and a personal computer. This sub-system records information about rivers as video images together with their corresponding location data. The GPS is used for calculating location data and for navigating the airplane to the monitoring site. The other is a simplified digital video editing system, which runs on a personal computer with Microsoft Windows 3.1. The system can also be used for management and planning of road environments, marine resources and forest resources, and for the prevention of disasters. 7 refs., 4 figs.

  7. Ground-based Nighttime Cloud Detection Using a Commercial Digital Camera: Observations at Manila Observatory (14.64N, 121.07E)

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.

    2014-12-01

    Cloud detection during nighttime poses a real problem to researchers because of the lack of optimum sensors that can specifically detect clouds during this time of day. Hence, lidars and satellites are currently among the instruments utilized to determine cloud presence in the atmosphere. These clouds play a significant role in the nighttime weather system because they act as barriers to thermal radiation from the Earth, reflecting this radiation back toward the surface and thereby lowering the rate at which the atmosphere cools at night. The objective of this study is to detect cloud occurrences at nighttime in order to study patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon PowerShot A2300) is operated continuously to capture nighttime clouds. The camera is housed in a weather-proof box with a glass cover and is placed on the rooftop of the Manila Observatory building; it gathers pictures of the sky every 5 min to observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds: in grayscale, pixels with clouds have greater values than pixels without clouds. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to discern pixels with clouds from pixels without clouds. Figs. 1a & 1b are sample unprocessed pictures of a cloudless night (May 22-23, 2014) and cloudy skies (May 23-24, 2014), respectively. Figs. 1c and 1d show the percentage occurrence of nighttime clouds on May 22-23 and May 23-24, 2014, respectively. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel has clouds to the total number of observations. Fig. 1c shows less than 50% cloud occurrence while Fig. 1d shows cloud
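The thresholding rule described above is simple enough to sketch. The following is an illustrative implementation, not the authors' code; the BT.601 luminance weights for the JPEG-to-grayscale conversion are an assumption:

```python
import numpy as np

def cloud_mask(rgb, threshold_frac=0.34):
    """Flag cloudy pixels in a nighttime sky image.

    rgb : (H, W, 3) uint8 array decoded from the camera's JPEG.
    Pixels brighter than threshold_frac of the maximum grayscale
    value (0.34 * 255, per the study) are classed as cloud.
    """
    # Luminance conversion (ITU-R BT.601 weights, as used by most
    # JPEG grayscale conversions -- an assumption here).
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2])
    return gray > threshold_frac * 255.0

def cloud_occurrence(masks):
    """Per-pixel occurrence: the fraction of frames in which each
    pixel was flagged as cloud over a night of 5-minute frames."""
    return np.stack(masks).astype(float).mean(axis=0)

# Synthetic example: a bright (cloudy) patch on a dark sky.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :2] = 200                       # bright patch -> cloud
mask = cloud_mask(img)
occ = cloud_occurrence([mask, np.zeros((4, 4), bool)])
```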

  8. Toward a miniaturized fundus camera.

    PubMed

    Gliss, Christine; Parel, Jean-Marie; Flynn, John T; Pratisto, Hans; Niederer, Peter

    2004-01-01

    Retinopathy of prematurity (ROP) describes a pathological development of the retina in prematurely born children. In order to prevent severe permanent damage to the eye and enable timely treatment, the fundus of the eye in such children has to be examined according to established procedures. For these examinations, our miniaturized fundus camera is intended to allow the acquisition of wide-angle digital pictures of the fundus for on-line or off-line diagnosis and documentation. We designed two prototypes of a miniaturized fundus camera, one with graded refractive index (GRIN)-based optics, the other with conventional optics. Two different modes of illumination were compared: transscleral and transpupillary. In both systems, the size and weight of the camera were minimized. The prototypes were tested on young rabbits. The experiments led to the conclusion that the combination of conventional optics with transpupillary illumination yields the best results in terms of overall image quality.

  9. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal- Oxide Semiconductor Active Pixel Sensor (CMOS), establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  10. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  11. NASA IceBridge: Airborne surveys of the polar sea ice covers

    NASA Astrophysics Data System (ADS)

    Richter-Menge, J.; Farrell, S. L.

    2014-12-01

    The NASA Operation IceBridge (OIB) airborne sea ice surveys are designed to continue a valuable series of sea ice thickness measurements by bridging the gap between NASA's Ice, Cloud and Land Elevation Satellite (ICESat), which operated from 2003 to 2009, and ICESat-2, which is scheduled for launch in 2017. Initiated in 2009, OIB has conducted campaigns over the western Arctic Ocean (March/April) and Southern Oceans (October/November) on an annual basis. Primary OIB sensors being used for sea ice observations include the Airborne Topographic Mapper laser altimeter, the Digital Mapping System digital camera, a Ku-band radar altimeter, a frequency-modulated continuous-wave (FMCW) snow radar, and a KT-19 infrared radiation pyrometer. Data from the campaigns are available to the research community at: http://nsidc.org/data/icebridge/. This presentation will summarize the spatial and temporal extent of the campaigns and highlight key scientific accomplishments, which include: • Documented changes in the Arctic marine cryosphere since the dramatic sea ice loss of 2007 • Novel snow depth measurements over sea ice in the Arctic • Improved skill of April-to-September sea ice predictions via numerical ice/ocean models • Validation of satellite altimetry measurements (ICESat, CryoSat-2, and ICESat-2/MABEL)

  12. NASA IceBridge: Scientific Insights from Airborne Surveys of the Polar Sea Ice Covers

    NASA Astrophysics Data System (ADS)

    Richter-Menge, J.; Farrell, S. L.

    2015-12-01

    The NASA Operation IceBridge (OIB) airborne sea ice surveys are designed to continue a valuable series of sea ice thickness measurements by bridging the gap between NASA's Ice, Cloud and Land Elevation Satellite (ICESat), which operated from 2003 to 2009, and ICESat-2, which is scheduled for launch in 2017. Initiated in 2009, OIB has conducted campaigns over the western Arctic Ocean (March/April) and Southern Oceans (October/November) on an annual basis when the thickness of sea ice cover is nearing its maximum. More recently, a series of Arctic surveys have also collected observations in the late summer, at the end of the melt season. The Airborne Topographic Mapper (ATM) laser altimeter is one of OIB's primary sensors, in combination with the Digital Mapping System digital camera, a Ku-band radar altimeter, a frequency-modulated continuous-wave (FMCW) snow radar, and a KT-19 infrared radiation pyrometer. Data from the campaigns are available to the research community at: http://nsidc.org/data/icebridge/. This presentation will summarize the spatial and temporal extent of the OIB campaigns and their complementary role in linking in situ and satellite measurements, advancing observations of sea ice processes across all length scales. Key scientific insights gained on the state of the sea ice cover will be highlighted, including snow depth, ice thickness, surface roughness and morphology, and melt pond evolution.

  13. AWiFS camera for Resourcesat

    NASA Astrophysics Data System (ADS)

    Dave, Himanshu; Dewan, Chirag; Paul, Sandip; Sarkar, S. S.; Pandya, Himanshu; Joshi, S. R.; Mishra, Ashish; Detroja, Manoj

    2006-12-01

    Remote sensors have been developed and used extensively the world over on aircraft and space platforms. India has developed and launched many sensors into space to survey natural resources. The AWiFS is one such camera, launched onboard the Resourcesat-1 satellite by ISRO in 2003. It is a medium-resolution camera with a 5-day revisit, designed for studies related to forestry, vegetation, soil, snow and disaster warning. The camera provides 56 m (nadir) resolution from an 817 km altitude in three visible bands and one SWIR band. This paper deals with the configuration features of the AWiFS camera of Resourcesat-1 and its onboard performance, and also highlights the camera being developed for Resourcesat-2. The AWiFS is realized with two identical cameras, viz. AWiFS-A and AWiFS-B, which cover a large field of view of 48°. Each camera consists of independent collecting optics and associated 6000-element detectors and electronics catering to 4 bands. The visible bands use linear silicon CCDs with 10 μm × 7 μm elements, while the SWIR band uses 13 μm staggered InGaAs linear active pixels. The camera electronics are custom designed for each detector based on detector and system requirements. The camera covers the total dynamic range up to 100% albedo with a single gain setting and 12-bit digitization, of which the 10 MSBs are transmitted. The saturation radiance of each band can also be selected through telecommand. The camera provides a very high SNR of about 700 near saturation. The camera components are housed in specially designed Invar structures. The AWiFS camera onboard Resourcesat-1 is providing excellent imagery and the data are routinely used the world over. The AWiFS for Resourcesat-2 is being developed with the overall performance specifications remaining the same. The camera electronics are miniaturized, with reductions in hardware packages, size and weight to one third.

  14. Spherical Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Developed largely through a Small Business Innovation Research contract through Langley Research Center, Interactive Picture Corporation's IPIX technology provides spherical photography: full panoramic, 360-degree images. NASA found the technology appropriate for use in guiding space robots, in the space shuttle and space station programs, as well as research in cryogenic wind tunnels and for remote docking of spacecraft. Images of any location are captured in their entirety in a 360-degree immersive digital representation. The viewer can navigate to any desired direction within the image. Several car manufacturers already use IPIX to give viewers a look at their latest line-up of automobiles. Another application is for non-invasive surgeries. By using OmniScope, surgeons can look more closely at various parts of an organ with medical viewing instruments now in use. Potential applications of IPIX technology include viewing of homes for sale, hotel accommodations, museum sites, news events, and sports stadiums.

  15. 75 FR 8112 - In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ... COMMISSION In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital... communication devices featuring digital cameras, and components thereof by reason of infringement of certain... mobile telephones or wireless communication devices featuring digital cameras, or ] components...

  16. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  17. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  18. Highly Portable Airborne Multispectral Imaging System

    NASA Technical Reports Server (NTRS)

    Lehnemann, Robert; Mcnamee, Todd

    2001-01-01

    A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.

  19. Lights, camera, action research: The effects of didactic digital movie making on students' twenty-first century learning skills and science content in the middle school classroom

    NASA Astrophysics Data System (ADS)

    Ochsner, Karl

    Students are moving away from content consumption to content production. Short movies are uploaded onto video social networking sites and shared around the world. Unfortunately, they usually contain little to no educational value, lack a narrative, and are rarely created in the science classroom. According to new Arizona Technology standards and ISTE NET*S, along with the framework from the Partnership for 21st Century Learning Standards, our society demands that students not only learn curriculum, but also think critically, problem-solve effectively, and become adept at communicating and collaborating. Didactic digital moviemaking in the science classroom may be one way to implement these twenty-first-century learning skills. An action research study using a mixed-methods approach to collect data was used to investigate whether didactic moviemaking can help eighth-grade students learn physical science content while incorporating the 21st-century learning skills of collaboration, communication, problem solving and critical thinking through their group production. Over a five-week period, students researched lessons, wrote scripts, acted, video recorded and edited a didactic movie that contained a narrative plot to teach a science strand from the Arizona State Standards in physical science. A pretest/posttest science content test and KWL chart were given before and after the innovation to measure content learned by the students. Students then took a 21st Century Learning Skills Student Survey to measure how much they perceived that communication, collaboration, problem solving and critical thinking were taking place during the production. An open-ended survey and a focus group of four students were used for qualitative analysis. Three science teachers used a project evaluation rubric to measure science content and production values from the movies. Triangulating the science content test, KWL chart, open-ended questions and the project evaluation rubric, it

  20. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  1. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot auto focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  2. Video indirect ophthalmoscopy using a hand-held video camera.

    PubMed

    Shanmugam, Mahesh P

    2011-01-01

    Fundus photography in adults and cooperative children is possible with a fundus camera or by using a slit lamp-mounted digital camera. A RetCam™ or a video indirect ophthalmoscope is necessary for fundus imaging in infants and young children under anesthesia. Herein, a technique for converting a digital video camera into a video indirect ophthalmoscope for fundus imaging is described. This device will allow anyone with a hand-held video camera to obtain fundus images. Limitations of this technique involve a learning curve and the inability to perform scleral depression.

  3. The all-sky camera revitalized.

    PubMed

    Oznovich, I; Yee, R; Schiffler, A; McEwen, D J; Sofko, G J

    1994-10-20

    An all-sky camera, a ground-based imager used since the 1950s in aeronomy and space physics studies, was refurbished with a modern control, digitization, and archiving system. Monochromatic and broadband digital images of airglow and aurora are continuously integrated and recorded by the low-cost unmanned system, which is located in northern Canada. Radiometric corrections applied to the data include noise subtraction, normalization to a flat-field response, and absolute calibration. The images are geometrically corrected with star positions and projected onto a geographic or geomagnetic coordinate system. An illustration of the application of corrected all-sky camera images to the study of auroral spirals is given.
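The listed radiometric corrections can be sketched as a single pipeline. This is an illustrative implementation, not the paper's code; the mean-normalized flat field and the single scalar calibration factor are assumptions:

```python
import numpy as np

def radiometric_correct(raw, dark, flat, cal_factor=1.0):
    """Apply the corrections listed above to an all-sky frame:
    noise (dark) subtraction, flat-field normalization, and an
    absolute calibration scale (counts -> physical units).

    raw, dark, flat : 2-D arrays of the same shape; dark is a
    matching dark frame, flat a flat-field response image.
    cal_factor : counts-to-radiance scale from absolute calibration.
    """
    signal = raw.astype(float) - dark.astype(float)
    # Normalize the flat so the correction preserves the mean level.
    flat_norm = flat.astype(float) / np.mean(flat)
    return cal_factor * signal / flat_norm

# Usage: a frame whose edge response is halved (vignetting-like
# falloff encoded in the flat) comes out uniform after correction.
raw = np.array([[110.0, 60.0], [110.0, 60.0]])
dark = np.full((2, 2), 10.0)
flat = np.array([[1.0, 0.5], [1.0, 0.5]])
corrected = radiometric_correct(raw, dark, flat)
```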

  4. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  5. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is provided. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature, this is not a problem, however, the technique needs adjustment for use with room temperature cameras. This article describes the adjustment made to the equation, and a test of this method.
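The adjusted photon-transfer idea can be sketched as follows. This is an illustrative reconstruction, not the article's exact equation; the frame-pair differencing and the dark-pair variance subtraction are assumptions consistent with the method described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def camera_gain(flat1, flat2, dark1, dark2):
    """Photon-transfer gain estimate (e-/DN) with a dark adjustment.

    For shot-noise-limited signal, var(DN) = mean(DN) / gain, so
    gain = mean / var. Differencing a frame pair cancels fixed-pattern
    noise (var(diff) = 2 * var); subtracting the dark-pair variance
    removes read and dark-current noise from the denominator.
    """
    f1, f2 = flat1.astype(float), flat2.astype(float)
    d1, d2 = dark1.astype(float), dark2.astype(float)
    # Mean signal with the dark level (incl. dark current) removed.
    mean_signal = 0.5 * (f1.mean() + f2.mean() - d1.mean() - d2.mean())
    var_flat = np.var(f1 - f2) / 2.0
    var_dark = np.var(d1 - d2) / 2.0
    return mean_signal / (var_flat - var_dark)

# Synthetic sensor: true gain 2 e-/DN, 5000 e- mean illumination,
# 10 e- read noise, negligible dark current.
gain, lam, read = 2.0, 5000.0, 10.0
shape = (256, 256)
def frame(signal):
    return (rng.poisson(signal, shape)
            + rng.normal(0.0, read, shape)) / gain
est = camera_gain(frame(lam), frame(lam), frame(0.0), frame(0.0))
```

On this synthetic data the estimate recovers the true gain of 2 e-/DN to within a few percent.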

  6. NEON: the first continental-scale ecological observatory with airborne remote sensing of vegetation canopy biochemistry and structure

    NASA Astrophysics Data System (ADS)

    Johnson, Brian R.; Kampe, Thomas U.; Kuester, Michele A.; Keller, Michael

    2009-08-01

    The National Ecological Observatory Network (NEON), being funded by the National Science Foundation, is a continental-scale research platform for discovering, understanding and forecasting the impacts of climate change, land-use change, and invasive species on ecology. Local site-based flux tower and field measurements will be coordinated with high resolution, regional airborne remote sensing observations. The NEON Airborne Observation Platform (AOP) consists of an aircraft platform carrying remote sensing instrumentation designed to achieve sub-meter to meter scale ground resolution to bridge scales from organism and stand scales to the scale of satellite based remote sensing. Data from the AOP will be openly available to the science community and will provide quantitative information on land use change, and changes in ecological structure and chemistry including the presence and effects of invasive species. Remote sensing instrumentation consists of an imaging spectrometer measuring surface reflectance over the continuous wavelength range from 400 to 2500 nm with 10 nm resolution, a scanning, small footprint waveform LiDAR for 3-D canopy structure measurements and a high resolution airborne digital camera. The AOP science objectives, key mission requirements, the conceptual design and development status are presented.

  7. NEON: the first continental-scale ecological observatory with airborne remote sensing of vegetation canopy biochemistry and structure

    NASA Astrophysics Data System (ADS)

    Kampe, Thomas U.; Johnson, Brian R.; Kuester, Michele; Keller, Michael

    2010-03-01

    The National Ecological Observatory Network (NEON) is an ecological observation platform for discovering, understanding and forecasting the impacts of climate change, land use change, and invasive species on continental-scale ecology. NEON will operate for 30 years and gather long-term data on ecological response changes and on feedbacks with the geosphere, hydrosphere, and atmosphere. Local ecological measurements at sites distributed within 20 ecoclimatic domains across the contiguous United States, Alaska, Hawaii, and Puerto Rico will be coordinated with high resolution, regional airborne remote sensing observations. The Airborne Observation Platform (AOP) is an aircraft platform carrying remote sensing instrumentation designed to achieve sub-meter to meter scale ground resolution, bridging scales from organisms and individual stands to satellite-based remote sensing. AOP instrumentation consists of a VIS/SWIR imaging spectrometer, a scanning small-footprint waveform LiDAR for 3-D canopy structure measurements and a high resolution airborne digital camera. AOP data will be openly available to scientists and will provide quantitative information on land use change and changes in ecological structure and chemistry including the presence and effects of invasive species. AOP science objectives, key mission requirements, and development status are presented including an overview of near-term risk-reduction and prototyping activities.

  8. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in analyses of deformation caused by moisture and insulation problems in historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm focal length lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm focal length lens served as the reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with both cameras and the image coordinates of the control points were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was then repeated with the determined calibration parameters for both cameras. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. Despite a pixel size four times larger, the thermographic camera images thus reached almost the same accuracy level as the digital camera. Based on these results, the interior geometry of the thermographic camera and its lens distortion were modelled efficiently

  9. Long-distance eye-safe laser TOF camera design

    NASA Astrophysics Data System (ADS)

    Kovalev, Anton V.; Polyakov, Vadim M.; Buchenkov, Vyacheslav A.

    2016-04-01

    We present a new TOF camera design based on a compact actively Q-switched diode-pumped solid-state laser operating in the 1.5 μm range and a receiver system based on a short-wave infrared InGaAs PIN-diode focal plane array with an image intensifier and a special readout integration circuit. The compact camera is capable of depth imaging at ranges up to 4 kilometers at 10 frames/s with a 1.2 m range error. The camera could be applied to airborne and space geodesy, location, and navigation.
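The depth measurement in a TOF camera reduces to timing the laser pulse's round trip. A minimal sketch of the range computation (the camera's actual timing electronics are not described in the abstract):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_ns):
    """Range in meters from the round-trip time of a laser pulse:
    the pulse travels to the target and back, so halve the path."""
    return C * round_trip_ns * 1e-9 / 2.0

# A target at 4 km returns the pulse after about 26.7 microseconds.
t_ns = 2 * 4000.0 / C * 1e9
r = tof_range(t_ns)
```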

  10. Method for out-of-focus camera calibration.

    PubMed

    Bell, Tyler; Xu, Jing; Zhang, Song

    2016-03-20

    State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered, even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocusing: the focal length difference is approximately 0.2% when the camera is focused compared to when the camera is substantially defocused.
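The phase-encoded feature points survive defocus because blurring a sinusoidal fringe attenuates its amplitude but leaves the carrier phase intact. A sketch of standard N-step phase-shifting recovery with equally spaced, full-period shifts (a generic illustration, not the paper's exact implementation):

```python
import math

def recover_phase(intensities, shifts):
    """N-step phase shifting with equally spaced, full-period shifts:
    recover the wrapped carrier phase from intensity samples."""
    num = sum(I * math.sin(s) for I, s in zip(intensities, shifts))
    den = sum(I * math.cos(s) for I, s in zip(intensities, shifts))
    return -math.atan2(num, den)

# Simulate ideal four-step fringes at a known carrier phase.
shifts = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
phi_true = 1.234
samples = [0.5 + 0.5 * math.cos(phi_true + s) for s in shifts]
phi_rec = recover_phase(samples, shifts)
```

Because the recovered phase depends on ratios of intensity sums, uniform blur cancels out, which is what allows the feature points to be located even from a strongly defocused camera.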

  11. Experimental Advanced Airborne Research Lidar (EAARL) Data Processing Manual

    USGS Publications Warehouse

    Bonisteel, Jamie M.; Nayegandhi, Amar; Wright, C. Wayne; Brock, John C.; Nagle, David

    2009-01-01

    The Experimental Advanced Airborne Research Lidar (EAARL) is an example of a Light Detection and Ranging (Lidar) system that utilizes a blue-green wavelength (532 nanometers) to determine the distance to an object. The distance is determined by recording the travel time of a transmitted pulse at the speed of light (fig. 1). This system uses raster laser scanning with full-waveform (multi-peak) resolving capabilities to measure submerged topography and adjacent coastal land elevations simultaneously (Nayegandhi and others, 2009). This document reviews procedures for the post-processing of EAARL data using the custom-built Airborne Lidar Processing System (ALPS). ALPS software was developed in an open-source programming environment operated on a Linux platform. It has the ability to combine the laser return backscatter digitized at 1-nanosecond intervals with aircraft positioning information. This solution enables the exploration and processing of the EAARL data in an interactive or batch mode. ALPS also includes modules for the creation of bare earth, canopy-top, and submerged topography Digital Elevation Models (DEMs). The EAARL system uses an Earth-centered coordinate and reference system that removes the necessity to reference submerged topography data relative to water level or tide gages (Nayegandhi and others, 2006). The EAARL system can be mounted in an array of small twin-engine aircraft that operate at 300 meters above ground level (AGL) at a speed of 60 meters per second (117 knots). While other systems strive to maximize operational depth limits, EAARL has a narrow transmit beam and receiver field of view (1.5 to 2 milliradians), which improves the depth-measurement accuracy in shallow, clear water but limits the maximum depth to about 1.5 Secchi disk depth (~20 meters) in clear water. 
The laser transmitter [Continuum EPO-5000 yttrium aluminum garnet (YAG)] produces up to 5,000 short-duration (1.2 nanosecond), low-power (70 microjoules) pulses each second
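Ranging from the digitized return can be sketched as locating a return peak in the 1-nanosecond-sampled waveform and converting the round-trip time to distance (a simplified single-peak illustration; the actual ALPS processing resolves full multi-peak waveforms):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_waveform(samples, dt_ns=1.0):
    """Range to the strongest return in a waveform digitized at
    dt_ns-nanosecond intervals: the peak index gives the round-trip
    time, which is halved for the one-way distance."""
    peak = max(range(len(samples)), key=lambda i: samples[i])
    t = peak * dt_ns * 1e-9
    return C * t / 2.0

# A peak at sample 2000 (2 microseconds round trip) is ~300 m away.
wave = [0] * 4096
wave[2000] = 100
r = range_from_waveform(wave)
```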

  12. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras themselves. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or, in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  13. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
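The center-of-gravity computation mentioned above is, at its core, an intensity-weighted centroid over the position-sensitive detector outputs. A minimal sketch with a hypothetical 3×3 response map:

```python
def center_of_gravity(weights):
    """Centroid of detector responses on a 2-D grid: the
    intensity-weighted mean position, as used to localize a
    scintillation event."""
    total = sum(w for row in weights for w in row)
    x = sum(w * j for row in weights for j, w in enumerate(row)) / total
    y = sum(w * i for i, row in enumerate(weights) for w in row) / total
    return x, y

# Hypothetical response map: light spread centered on the middle tube.
response = [[0, 1, 0],
            [1, 4, 1],
            [0, 1, 0]]
cx, cy = center_of_gravity(response)
```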

  14. The influence of the in situ camera calibration for direct georeferencing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Barrios, R.; Centeno, J.

    2014-11-01

    The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process, the accuracies of the obtained results depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, necessary because not all sensors can occupy the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which is not available in a conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs were performed, and their results are analyzed and used in photogrammetric experiments. The IOPs

  15. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight application do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  16. Lightweight Electronic Camera for Research on Clouds

    NASA Technical Reports Server (NTRS)

    Lawson, Paul

    2006-01-01

    "Micro-CPI" (wherein "CPI" signifies "cloud-particle imager") is the name of a small, lightweight electronic camera that has been proposed for use in research on clouds. It would acquire and digitize high-resolution (3-μm-pixel) images of ice particles and water drops at a rate up to 1,000 particles (and/or drops) per second.

  17. New television camera eliminates vidicon tube

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Small, lightweight camera systems use solid state imaging devices in the form of phototransistor mosaic sensors instead of vidicon tubes for light sensing and image conversion. The digital logic circuits scan the sensor mosaic at 60 frames per second to produce pictures composed of a series of dots rather than lines.

  18. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  19. Novel fundus camera design

    NASA Astrophysics Data System (ADS)

    Dehoog, Edward A.

    A fundus camera is a complex optical system that makes use of the principle of reflex-free indirect ophthalmoscopy to image the retina. Despite being in existence as early as the 1900s, little has changed in the design of the fundus camera, and there is minimal information about the design principles utilized. Parameters and specifications involved in the design of fundus cameras are determined, and their effect on system performance is discussed. Fundus cameras incorporating different design methods are modeled, and a performance evaluation based on design parameters is used to determine the effectiveness of each design strategy. By determining the design principles involved in the fundus camera, new cameras can be designed to include specific imaging modalities such as optical coherence tomography, imaging spectroscopy, and imaging polarimetry to gather additional information about the properties and structure of the retina. Design principles utilized to incorporate such modalities into fundus camera systems are discussed. The design, implementation, and testing of a snapshot polarimeter fundus camera are demonstrated.

  20. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  1. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  2. The ethics of using cameras in care homes.

    PubMed

    Fisk, Malcolm; Flórez-Revuelta, Francisco

    There are concerns about how cameras in care homes might intrude on residents' and staff privacy, but worries about resident abuse must be recognised. This article outlines an ethical way forward and calls for a rethink about cameras that focuses less on their ability to "see" and more on their use as data-gathering tools.

  3. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  4. Improved Airborne System for Sensing Wildfires

    NASA Technical Reports Server (NTRS)

    McKeown, Donald; Richardson, Michael

    2008-01-01

    The Wildfire Airborne Sensing Program (WASP) is engaged in a continuing effort to develop an improved airborne instrumentation system for sensing wildfires. The system could also be used for other aerial-imaging applications, including mapping and military surveillance. Unlike prior airborne fire-detection instrumentation systems, the WASP system would not be based on custom-made multispectral line scanners and associated custom-made complex optomechanical servomechanisms, sensors, readout circuitry, and packaging. Instead, the WASP system would be based on commercial off-the-shelf (COTS) equipment that would include (1) three or four electronic cameras (one for each of three or four wavelength bands) instead of a multispectral line scanner; (2) all associated drive and readout electronics; (3) a camera-pointing gimbal; (4) an inertial measurement unit (IMU) and a Global Positioning System (GPS) receiver for measuring the position, velocity, and orientation of the aircraft; and (5) a data-acquisition subsystem. It would be necessary to custom-develop an integrated sensor optical-bench assembly, a sensor-management subsystem, and software. The use of mostly COTS equipment is intended to reduce development time and cost, relative to those of prior systems.

  5. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04 in., for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  6. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  7. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  8. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  9. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, as well as to give some indication of present weather. Similarly, during springtime, the camera images show the changes in ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  10. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection, and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed, and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, or storms. These camera systems are being deployed successfully in major ETC projects throughout the world.
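The look-up-table control described above can be sketched as mapping a measured light level to a pre-tuned parameter set. All threshold and parameter values below are invented for illustration, not taken from the deployed systems:

```python
# Hypothetical lighting-condition lookup table; the abstract does not
# publish the actual parameter values, so these numbers are invented.
CAMERA_LUT = {
    "sunny":    {"shutter": 1 / 8000, "gain_db": 0,  "gamma": 0.45},
    "twilight": {"shutter": 1 / 4000, "gain_db": 6,  "gamma": 0.60},
    "night":    {"shutter": 1 / 2000, "gain_db": 18, "gamma": 1.00},
}

def select_params(lux):
    """Pick a camera parameter set from the measured illuminance
    (thresholds are illustrative only)."""
    if lux > 10_000:
        return CAMERA_LUT["sunny"]
    if lux > 100:
        return CAMERA_LUT["twilight"]
    return CAMERA_LUT["night"]

p = select_params(50_000)  # bright daylight selects the "sunny" entry
```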

  11. Concept for an airborne real-time ISR system with multi-sensor 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Haraké, Laura; Schilling, Hendrik; Blohm, Christian; Hillemann, Markus; Lenz, Andreas; Becker, Merlin; Keskin, Göksu; Middelmann, Wolfgang

    2016-10-01

    In modern aerial Intelligence, Surveillance and Reconnaissance operations, precise 3D information becomes indispensable for increased situation awareness. In particular, object geometries represented by texturized digital surface models constitute an alternative to a pure evaluation of radiometric measurements. Besides the level of detail of the 3D data, its timely availability is essential for making quick decisions. Expanding the concept of our preceding remote sensing platform developed together with OHB System AG and Geosystems GmbH, in this paper we present an airborne multi-sensor system based on a motor glider equipped with two wing pods; one carries the sensors, whereas the second pod downlinks sensor data to a connected ground control station using the Aerial Reconnaissance Data System of OHB. An uplink is created to receive remote commands from the manned mobile ground control station, which in turn processes and evaluates incoming sensor data. The system allows the integration of efficient image processing and machine learning algorithms. In this work, we introduce a near real-time approach for the acquisition of a texturized 3D data model with the help of an airborne laser scanner and four high-resolution multi-spectral (RGB, near-infrared) cameras. Image sequences from nadir and off-nadir cameras permit generating dense point clouds and texturizing facades of buildings as well. The ground control station distributes processed 3D data over a linked geoinformation system with web capabilities to off-site decision-makers. As the accurate acquisition of sensor data requires boresight-calibrated sensors, we additionally examine the first steps of a camera calibration workflow.

  12. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. 
This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate
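The multi-camera position computation described above ultimately intersects viewing rays from the cameras. A minimal two-ray least-squares triangulation (a simplified stand-in for the full perspective/affine-model matching; it assumes the rays are not parallel):

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two viewing rays (c + t*d):
    solve for the ray parameters that minimize the gap between
    the rays, then return the midpoint of closest approach.
    Assumes the rays are not parallel."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for t1, t2 from d/dt |c1 + t1*d1 - c2 - t2*d2|^2 = 0.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0

# Two cameras on the x-axis both observe a point at (0, 0, 10).
p = triangulate(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 10.0]),
                np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 10.0]))
```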

  13. Sampling for Airborne Radioactivity

    DTIC Science & Technology

    2007-10-01

    compared to betas, gammas and neutrons. For an airborne radioactivity detection system, it is most important to be able to detect alpha particles and... Airborne radioactive particles may emit alpha, beta, gamma or neutron radiation, depending on which radioisotope is present. From a health perspective...

  14. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system places cameras along a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, through using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  15. Camera Calibration with Radial Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays a more and more important role nowadays. Besides real digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems has a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. With statistical methods, the accuracy of photo measurements was analyzed in dependency on the distance of points from the image center. This test provides a curve for the measurement precision as a function of the photo radius. A high number of camera types have been tested with well-distributed point measurements in image space. The tests lead to a general conclusion: there is a functional connection between accuracy and radial distance, which yields a method to check and enhance the geometric capability of the cameras in respect of these results.
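The precision-vs-radius curve can be approximated by binning image-measurement residuals into concentric rings and computing the RMS per ring. A minimal sketch with synthetic (radius, residual) pairs:

```python
import math

def precision_by_radius(points, n_bins=4, max_r=1.0):
    """RMS of measurement residuals grouped by radial distance from
    the image center; points is a list of (radius, residual) pairs.
    Empty rings yield None."""
    bins = [[] for _ in range(n_bins)]
    for r, res in points:
        k = min(int(r / max_r * n_bins), n_bins - 1)
        bins[k].append(res)
    return [math.sqrt(sum(v * v for v in b) / len(b)) if b else None
            for b in bins]

# Synthetic residuals that grow with radius, as the abstract expects.
data = [(0.1, 0.3), (0.3, 0.4), (0.6, 0.6), (0.9, 0.8)]
rms = precision_by_radius(data)
```

Fitting a smooth function through the per-ring RMS values gives the precision-as-a-function-of-radius curve described in the abstract.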

  16. The PS1 Gigapixel Camera

    NASA Astrophysics Data System (ADS)

    Tonry, John L.; Isani, S.; Onaka, P.

    2007-12-01

    The world's largest and most advanced digital camera has been installed on the Pan-STARRS-1 (PS1) telescope on Haleakala, Maui. Built at the University of Hawaii at Manoa's Institute for Astronomy (IfA) in Honolulu, the gigapixel camera will capture images that will be used to scan the skies for killer asteroids, and to create the most comprehensive catalog of stars and galaxies ever produced. The CCD sensors at the heart of the camera were developed in collaboration with Lincoln Laboratory of the Massachusetts Institute of Technology. The image area, which is about 40 cm across, contains 60 identical silicon chips, each of which contains 64 independent imaging circuits. Each of these imaging circuits contains approximately 600 x 600 pixels, for a total of about 1.4 gigapixels in the focal plane. The CCDs themselves employ the innovative technology called "orthogonal transfer." Splitting the image area into about 4,000 separate regions in this way has three advantages: data can be recorded more quickly, saturation of the image by a very bright star is confined to a small region, and any defects in the chips affect only a small part of the image area. The CCD camera is controlled by an ultrafast 480-channel control system developed at the IfA. The individual CCD cells are grouped in 8 x 8 arrays on a single silicon chip called an orthogonal transfer array (OTA), which measures about 5 cm square. There are a total of 60 OTAs in the focal plane of each telescope.

  17. Calibration of Low Cost RGB and NIR Uav Cameras

    NASA Astrophysics Data System (ADS)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in the resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DEM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by unstable and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration test fields, and various software. The first part of the paper contains a brief theoretical introduction, including basic definitions such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper covers the camera calibration process, with details of the calibration methods and models that have been used. The Sony Nex 5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application, and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been done.

  18. Assessing the Photogrammetric Potential of Cameras in Portable Devices

    NASA Astrophysics Data System (ADS)

    Smith, M. J.; Kokkas, N.

    2012-07-01

    In recent years, an increasing number of portable devices, tablets and smartphones have employed high-resolution digital cameras to satisfy consumer demand. In most cases these cameras are designed primarily for capturing visually pleasing images, and the potential of using smartphone and tablet cameras for metric applications remains uncertain. The compact nature of the host devices leads to very small cameras and therefore small geometric characteristics. It also makes them extremely portable, and their integration into a multi-function device as part of the basic unit cost makes them readily available. Many application specialists may find them an attractive proposition where some modest photogrammetric capability would be useful. This paper investigates the geometric potential of these cameras for close-range photogrammetric applications by: • investigating their geometric characteristics using the self-calibration method of camera calibration, and comparing the results with those from a state-of-the-art digital SLR camera; • investigating their capability for 3D building modelling, again comparing the results with those obtained from a digital SLR camera. The early results presented show that the iPhone has greater potential for photogrammetric use than the iPad.

  19. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the difference JPEG 2-D arrays are used to identify statistical differences caused by the image-formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of the proposed statistical model is demonstrated by large-scale experimental results.
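The feature extraction described above can be sketched in a few lines: form a directional difference array, threshold it, and estimate the Markov transition probabilities. A hedged numpy version for one (horizontal) direction is shown below; the threshold T = 4 is a typical choice in this line of work, assumed here rather than taken from the paper:

```python
import numpy as np

def markov_tpm(block, T=4):
    """Thresholded horizontal Markov transition-probability features, in
    the spirit of the statistical model described above.
    block: 2-D integer array (e.g., a JPEG-coefficient or pixel plane).
    Returns a (2T+1) x (2T+1) matrix P where P[m, n] estimates
    Pr(d[i, j+1] = n - T | d[i, j] = m - T) for the horizontal
    difference array d clipped to [-T, T]."""
    d = block[:, :-1].astype(int) - block[:, 1:].astype(int)
    d = np.clip(d, -T, T)                    # thresholding step
    cur, nxt = d[:, :-1] + T, d[:, 1:] + T   # shift states to 0..2T
    size = 2 * T + 1
    counts = np.zeros((size, size))
    np.add.at(counts, (cur.ravel(), nxt.ravel()), 1)
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Each of the (2T+1)^2 = 81 entries becomes one SVM feature; the paper
# applies this over four directions and over the Y and Cb planes.
rng = np.random.default_rng(0)
feats = markov_tpm(rng.integers(0, 255, size=(64, 64)))
```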

  20. Digital In, Digital Out: Digital Editing with Firewire.

    ERIC Educational Resources Information Center

    Doyle, Bob; Sauer, Jeff

    1997-01-01

    Reviews linear and nonlinear digital video (DV) editing equipment and software, using the IEEE 1394 (FireWire) connector. Includes a chart listing specifications and rating eight DV editing systems, reviews two DV still-photo cameras, and previews beta DV products. (PEN)

  1. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  2. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  3. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1984-09-28

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (uv to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  4. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1989-03-21

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras is disclosed. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1,000 keV x-rays. 3 figs.

  5. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been, for decades, a specialty market ruled by a few companies. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities, equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity and constant colors, which are important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill factor. 3D vision, which relies on stereo or on time-of-flight high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size and because of their higher speed.
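The multiple-exposure approach to extending dynamic range mentioned above can be illustrated with a deliberately simplified merge (a sketch of the general technique, not any vendor's actual pipeline): scale each frame by its exposure time and average only the unsaturated pixels.

```python
import numpy as np

def merge_exposures(frames, times, full_scale=4095):
    """Naive multiple-exposure merge extending dynamic range (a sketch,
    not a production HDR pipeline). frames: list of raw arrays in ADU;
    times: matching exposure times. Each frame is normalized to
    ADU-per-unit-time; pixels near full scale are treated as clipped
    and excluded from the average."""
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros(frames[0].shape, dtype=float)
    for f, t in zip(frames, times):
        valid = f < 0.98 * full_scale       # drop saturated pixels
        num += np.where(valid, f / t, 0.0)
        den += valid
    return num / np.maximum(den, 1)

# Simulated scene: the bright pixel saturates the long exposure but is
# recovered from the short one.
scene = np.array([10.0, 100.0, 5000.0])       # "true" radiance (hypothetical)
short = np.clip(scene * 0.5, 0, 4095)          # 0.5 s exposure
long_ = np.clip(scene * 4.0, 0, 4095)          # 4 s exposure, bright pixel clips
hdr = merge_exposures([short, long_], [0.5, 4.0])
```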

  6. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and its small volume and weight suit an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer sets the camera parameters, controls the filter wheel and the stabilized platform, acquires the image and POS data, and stores them. Peripheral devices can be connected via the ports of the embedded computer, which makes system operation and management of the stored image data easy. The airborne multispectral imaging system has the advantages of small volume, multiple functions and good expansibility. Imaging experiments show that the system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  7. The Large Synoptic Survey Telescope (LSST) Camera

    ScienceCinema

    None

    2016-12-02

    Ranked as the top ground-based national priority for the field for the current decade, LSST is currently under construction in Chile. The U.S. Department of Energy’s SLAC National Accelerator Laboratory is leading the construction of the LSST camera – the largest digital camera ever built for astronomy. SLAC Professor Steven M. Kahn is the overall Director of the LSST project, and SLAC personnel are also participating in the data management. The National Science Foundation is the lead agency for construction of the LSST. Additional financial support comes from the Department of Energy and private funding raised by the LSST Corporation.

  8. The Large Synoptic Survey Telescope (LSST) Camera

    SciTech Connect

    2016-11-01

    Ranked as the top ground-based national priority for the field for the current decade, LSST is currently under construction in Chile. The U.S. Department of Energy’s SLAC National Accelerator Laboratory is leading the construction of the LSST camera – the largest digital camera ever built for astronomy. SLAC Professor Steven M. Kahn is the overall Director of the LSST project, and SLAC personnel are also participating in the data management. The National Science Foundation is the lead agency for construction of the LSST. Additional financial support comes from the Department of Energy and private funding raised by the LSST Corporation.

  9. Camera-enabled techniques for organic synthesis

    PubMed Central

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and on labour-intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  10. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  11. Streak camera meeting summary

    SciTech Connect

    Dolan, Daniel H.; Bliss, David E.

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  12. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  13. LSST Camera Optics Design

    SciTech Connect

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design; describe the methodology for fabricating, coating, mounting and testing the lenses and filters; and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  14. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary widely. Presence capture cameras still face the same quality issues as earlier generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual-reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features can be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors which remain valid for presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  15. An evaluation of onshore digital elevation models for modelling tsunami inundation zones

    NASA Astrophysics Data System (ADS)

    Griffin, Jonathan; Latief, Hamzah; Kongko, Widjo; Harig, Sven; Horspool, Nick; Hanung, Raditya; Rojali, Aditia; Maher, Nicola; Fuchs, Annika; Hossen, Jakir; Upi, Supriyati; Edi, Dewanto; Rakowsky, Natalja; Cummins, Phil

    2015-06-01

    A sensitivity study is undertaken to assess the utility of different onshore digital elevation models (DEMs) for simulating the extent of tsunami inundation, using case studies from two locations in Indonesia. We compare airborne IFSAR, ASTER and SRTM against high-resolution LiDAR and stereo-camera data in locations with different coastal morphologies. Tsunami inundation extents modelled with airborne IFSAR DEMs are comparable with those modelled with the higher-resolution datasets and are also consistent with historical run-up data, where available. Large vertical errors and poor resolution of the coastline in the ASTER and SRTM elevation datasets cause the modelled inundation extent to be much smaller than with the other datasets and observations; therefore, ASTER and SRTM should not be used to underpin tsunami inundation models. A model mesh resolution of 25 m was sufficient for estimating the inundated area when using elevation data with high vertical accuracy in the case studies presented here. Differences in modelled inundation between digital terrain models (DTMs) and digital surface models (DSMs) for LiDAR and IFSAR are greater than the differences between the two data types. Models using DTMs may overestimate inundation, while those using DSMs may underestimate inundation when a constant Manning's roughness value is used. We recommend using DTMs for modelling tsunami inundation extent, with further work needed to resolve the scale at which surface roughness should be parameterised.
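The paper's inundation extents come from full hydrodynamic tsunami models. A crude "bathtub" fill, shown here only as an illustration with hypothetical elevations, still makes the key point that DEM vertical error maps directly into inundation-extent error:

```python
import numpy as np

def bathtub_inundation(dem, water_level):
    """Crude 'bathtub' inundation mask: cells at or below the given water
    surface elevation count as flooded. This ignores flow dynamics,
    connectivity and roughness (the paper's tsunami models are full
    hydrodynamic simulations), but it illustrates how DEM vertical error
    translates directly into inundation-extent error."""
    return dem <= water_level

# A sloping 1-D shore profile, elevations in metres (hypothetical values):
dem = np.array([0.5, 1.0, 2.0, 3.5, 6.0])
true_extent = bathtub_inundation(dem, 3.0).sum()            # 3 flooded cells
biased_extent = bathtub_inundation(dem + 2.0, 3.0).sum()    # 2 flooded cells
# A +2 m positive elevation bias, of the order of SRTM/ASTER vertical
# error, shrinks the modelled extent, consistent with the
# underestimation reported above.
```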

  16. Airborne gravity is here

    SciTech Connect

    Hammer, S.

    1982-01-11

    After 20 years of development efforts, the airborne gravity survey has finally become a practical exploration method. Besides gravity data, the airborne survey can also collect simultaneous, continuous records of high-precision magnetic-field data as well as terrain clearance; these provide a topographic contour map useful in calculating terrain conditions and in subsequent planning and engineering. Compared with a seismic survey, the airborne gravity method can cover the same area much more quickly and cheaply; a seismograph could then detail the interesting spots.

  17. NIR-green-blue high-resolution digital images for assessment of winter cover crop biomass

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Many small unmanned aerial systems use true-color digital cameras for remote sensing. For some cameras, only the red channel is sensitive to near-infrared (NIR) light; we attached a custom red-blocking filter to a digital camera to obtain NIR-green-blue digital images. One advantage of this low-co...

  18. Smart Camera Technology Increases Quality

    NASA Technical Reports Server (NTRS)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full, after which subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  19. CCD Luminescence Camera

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with a low-noise charge-coupled-device (CCD) camera to produce a new instrument for analyzing performance and failures of microelectronic devices that emit infrared light during operation. The CCD camera is also used to identify very clearly parts that have failed, where luminescence is typically found.

  20. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  1. Compact Solar Camera.

    ERIC Educational Resources Information Center

    Juergens, Albert

    1980-01-01

    Describes a compact solar camera built as a one-semester student project. This camera is used for taking pictures of the sun and moon and for direct observation of the image of the sun on a screen. (Author/HM)

  2. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  3. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU), are presented for the commercial digital camera Canon 5D Mark III, together with the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
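The electronic gain (e⁻/ADU) listed among the basic characteristics is conventionally measured with the photon-transfer method, which the abstract does not detail. A minimal simulated sketch of that standard technique (the sensor numbers below are hypothetical):

```python
import numpy as np

def photon_transfer_gain(flat1, flat2):
    """Estimate electronic gain K (e-/ADU) from a pair of flat-field
    frames using the photon-transfer relation var_ADU = mean_ADU / K
    for shot-noise-limited signals. Differencing two flats cancels
    fixed-pattern noise; the variance of the difference is twice the
    per-frame shot-noise variance."""
    mean_adu = 0.5 * (flat1.mean() + flat2.mean())
    var_adu = np.var(flat1.astype(float) - flat2.astype(float)) / 2.0
    return mean_adu / var_adu

# Simulated sensor: 20,000 e- mean illumination, true gain 2.5 e-/ADU.
rng = np.random.default_rng(1)
true_gain = 2.5
electrons = lambda: rng.poisson(20000, size=(256, 256))
f1, f2 = electrons() / true_gain, electrons() / true_gain
k = photon_transfer_gain(f1, f2)   # recovers a value close to 2.5
```

Read noise is measured the same way from a pair of bias (zero-illumination) frames; here only the gain step is sketched.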

  4. A Detailed Examination of DTM Source Data: Light, Camera, Action

    NASA Astrophysics Data System (ADS)

    Mosbrucker, A. R.; Spicer, K.; Major, J. J.; Pitlick, J.; Normandeau, J.

    2013-12-01

    High-resolution, multi-temporal, remote sensing technologies have revolutionized geomorphic analysis. Topographic point measurements (XYZ) acquired from airborne and terrestrial laser scanning (ALS and TLS) and photogrammetry commonly are used to generate 3D digital terrain models (DTMs). Here, we compare DTMs generated using Structure-from-Motion (SfM) photogrammetry to ALS, TLS, and classic photogrammetry. Our investigation utilized 5 years of remotely sensed topographic data, from ALS (2007, 2009), TLS (2010-2012), and airborne and terrestrial close-range oblique photographs (using both classic and SfM photogrammetry) (2010-2012), of a 70,000 m2, 500 m-long reach of the upper North Fork Toutle River, Washington, devastated by the cataclysmic 1980 eruption of Mount St. Helens. The study reach is sparsely vegetated and features 10-30 m-tall vertical banks separated by a 170 m-wide floodplain. In addition to remotely sensed data, we surveyed more than 300 ground control points (GCPs) using a 1-arcsecond reflectorless total station and map- and survey-grade GPS and RTK-GNSS. Few, if any, data sets have been obtained with this variety of technologies in spatial and temporal coincidence. We examine the application of each technique to assess fluvial morphological change, as computed by DTM differencing. A subset of GCPs was used to transform image coordinates into geodetic datum. DTM uncertainty was then quantified using the remaining GCPs. This uncertainty was used to determine the minimum level of detectable change. Owing to highly variable topography and point-to-surface interpolation techniques, method strengths and weaknesses were identified. ALS data were found to have greatest uncertainty in areas of low point density on steep slopes. TLS produced highly variable point density in the floodplain, where interpolation error is likely to be minimal. In contrast, classic and SfM photogrammetry using oblique photographs with a high degree of image overlap produced
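The "minimum level of detectable change" derived from DTM uncertainty is commonly implemented as a minimum level of detection (LoD) threshold applied to the DTM of difference, with per-surface errors propagated in quadrature. A hedged numpy sketch, with hypothetical uncertainty values rather than figures from this study:

```python
import numpy as np

def dod_with_lod(dtm_new, dtm_old, sigma_new, sigma_old, t=1.96):
    """DTM-of-difference (DoD) change detection with a minimum level of
    detection (LoD). Elevation uncertainties of the two surfaces are
    combined in quadrature; apparent changes smaller than t times the
    combined error are masked as indistinguishable from noise."""
    dod = dtm_new - dtm_old
    lod = t * np.sqrt(sigma_new**2 + sigma_old**2)
    return np.where(np.abs(dod) >= lod, dod, 0.0), lod

# Hypothetical 1 x 3 reach, with 0.1 m and 0.15 m DTM uncertainties:
new = np.array([10.0, 10.5, 9.0])
old = np.array([10.0, 10.2, 10.0])
change, lod = dod_with_lod(new, old, 0.1, 0.15)
# LoD ~= 0.35 m: the 0.3 m apparent change is masked as noise, while
# the -1.0 m erosion signal is retained.
```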

  5. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  6. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  7. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

    The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam builds on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro-electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium-ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio-frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximately flight-like configuration for demonstration on an air-bearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  8. Optical Communications Link to Airborne Transceiver

    NASA Technical Reports Server (NTRS)

    Regehr, Martin W.; Kovalik, Joseph M.; Biswas, Abhijit

    2011-01-01

    An optical link from Earth to an aircraft demonstrates the ability to establish a link from a ground platform to a transceiver moving overhead. An airplane presents a challenging disturbance environment, including airframe vibrations and occasional abrupt changes in attitude during flight. These disturbances make it difficult to maintain pointing lock in an optical transceiver on an airplane. Acquisition can also be challenging; in the case of the aircraft link, the ground station initially has no precise knowledge of the aircraft's location. An airborne pointing system has been designed, built, and demonstrated using direct-drive brushless DC motors for passive isolation of pointing disturbances and for high-bandwidth control feedback. The airborne transceiver uses a GPS-INS system to determine the aircraft's position and attitude, and then to illuminate the ground station for acquisition. The ground transceiver participates in link-pointing acquisition by first using a wide-field camera to detect the initial illumination from the airborne beacon and to perform coarse pointing; it then transfers control to a high-precision pointing detector. Using this scheme, live video was successfully streamed from the ground to the aircraft at 270 Mb/s while simultaneously downlinking a 50 kb/s data stream from the aircraft to the ground.

  9. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  10. The Mars observer camera

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Soulanille, T.; Ravine, M.

    1987-01-01

    A camera designed to operate under the extreme constraints of the Mars Observer Mission was selected by NASA in April, 1986. Contingent upon final confirmation in mid-November, the Mars Observer Camera (MOC) will begin acquiring images of the surface and atmosphere of Mars in September-October 1991. The MOC incorporates both a wide angle system for low resolution global monitoring and intermediate resolution regional targeting, and a narrow angle system for high resolution selective surveys. Camera electronics provide control of image clocking and on-board, internal editing and buffering to match whatever spacecraft data system capabilities are allocated to the experiment. The objectives of the MOC experiment follow.

  11. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  12. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  13. Do speed cameras reduce collisions?

    PubMed

    Skubic, Jeffrey; Johnson, Steven B; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26-mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods - before cameras were placed, while cameras were in place, and after cameras were removed. A 14-mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  14. Airborne and Ground-Based Platforms for Data Collection in Small Vineyards: Examples from the UK and Switzerland

    NASA Astrophysics Data System (ADS)

    Green, David R.; Gómez, Cristina; Fahrentrapp, Johannes

    2015-04-01

    This paper presents an overview of some of the low-cost ground and airborne platforms and technologies now becoming available for data collection in small area vineyards. Low-cost UAV or UAS platforms and cameras are now widely available as the means to collect both vertical and oblique aerial still photography and airborne videography in vineyards. Examples of small aerial platforms include the AR Parrot Drone, the DJI Phantom (1 and 2), and 3D Robotics IRIS+. Both fixed-wing and rotary-wing platforms offer numerous advantages for aerial image acquisition, including the freedom to obtain high resolution imagery at any time required. Imagery captured can be stored on mobile devices such as an Apple iPad and shared, written directly to a memory stick or card, or saved to the Cloud. The imagery can either be visually interpreted or subjected to semi-automated analysis using digital image processing (DIP) software to extract information about vine status or the vineyard environment. At the ground level, a radio-controlled 'rugged' model 4x4 vehicle can also be used as a mobile platform to carry a number of sensors (e.g. a Go-Pro camera) around a vineyard, thereby facilitating quick and easy field data collection from both within the vine canopy and rows. For the small vineyard owner/manager with limited financial resources, this technology has a number of distinct advantages to aid in vineyard management practices: it is relatively cheap to purchase; requires a short learning curve to use and to master; can make use of autonomous ground control units for repetitive coverage, enabling reliable monitoring; and information can easily be analysed and integrated within a GIS with minimal expertise. In addition, these platforms make widespread use of familiar and everyday, off-the-shelf technologies such as WiFi, Go-Pro cameras, Cloud computing, and smartphones or tablets as the control interface, all with a large and well established end-user support base.
Whilst there are

  15. Comparisons of Simultaneously Acquired Airborne Sfm Photogrammetry and Lidar

    NASA Astrophysics Data System (ADS)

    Larsen, C. F.

    2014-12-01

    Digital elevation models (DEMs) created using images from a consumer DSLR camera are compared against simultaneously acquired LiDAR on a number of airborne mapping projects across Alaska, California and Utah. The aircraft used is a Cessna 180, and is equipped with the University of Alaska Geophysical Institute (UAF-GI) scanning airborne LiDAR system. This LiDAR is the same as described in Johnson et al., 2013, and is the principal instrument used for NASA's Operation IceBridge flights in Alaska. The system has been in extensive use since 2009, and is particularly well characterized with dozens of calibration flights and a careful program of boresight angle determination and monitoring. The UAF-GI LiDAR has a precision of +/- 8 cm and accuracy of +/- 15 cm. The photogrammetry DEM simultaneously acquired with the LiDAR relies on precise shutter timing using an event marker input to the IMU associated with the LiDAR system. The photo positions are derived from the fully coupled GPS/IMU processing, which samples at 100 Hz and is able to directly calculate the antenna to image plane offset displacements from the full orientation data. This use of the GPS/IMU solution means that both the LiDAR and Cessna 180 photogrammetry DEM share trajectory input data; however, no orientation data or ground control is used for the photogrammetry processing. The photogrammetry DEMs are overlaid on the LiDAR point cloud and analyzed for horizontal shifts or warps relative to the LiDAR. No warping or horizontal shifts have been detectable for a number of photogrammetry DEMs. Vertical offsets range within +/- 30 cm, with a typical standard deviation about the mean of 10 cm or better. LiDAR and photogrammetry function inherently differently over trees and brush, and direct comparisons between the two methods show much larger differences over vegetated areas. Finally, the differences in flight patterns associated with the two methods will be discussed, highlighting the photogrammetry
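
    The offset statistics reported above can be computed by sampling the photogrammetry DEM at each LiDAR return and summarizing the elevation differences. The sketch below is a hedged illustration of that comparison, not the paper's actual workflow; the grid geometry, nearest-neighbor sampling, and synthetic data are all assumptions.

```python
# Sample a DEM raster at LiDAR point locations and report the mean and
# standard deviation of (DEM - LiDAR) elevation differences.
import numpy as np

def vertical_offset_stats(dem, x0, y0, dx, dy, lidar_xyz):
    """Mean and std of (DEM - LiDAR) elevation at each LiDAR return.

    dem: 2D elevation array; (x0, y0) is the center of cell [0, 0] and
    (dx, dy) the cell size. Nearest-neighbor sampling for brevity.
    """
    x, y, z = lidar_xyz.T
    cols = np.round((x - x0) / dx).astype(int)
    rows = np.round((y - y0) / dy).astype(int)
    ok = (rows >= 0) & (rows < dem.shape[0]) & (cols >= 0) & (cols < dem.shape[1])
    dz = dem[rows[ok], cols[ok]] - z[ok]
    return float(dz.mean()), float(dz.std())

# Synthetic check: a flat 100 m surface with a 0.2 m DEM bias.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 99, 1000),
                       rng.uniform(0, 99, 1000),
                       np.full(1000, 100.0)])
dem = np.full((100, 100), 100.2)
mean_dz, std_dz = vertical_offset_stats(dem, 0.0, 0.0, 1.0, 1.0, pts)
```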

  16. Advanced CCD camera developments

    SciTech Connect

    Condor, A.

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  17. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  18. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  19. Airborne Next: Rethinking Airborne Organization and Applying New Concepts

    DTIC Science & Technology

    2015-06-01

    structures since its employment on a large scale during World War II. It is puzzling to consider how little airborne organizational structures and employment...future potential of airborne concepts by rethinking traditional airborne organizational structures and employment concepts. Using a holistic approach in... structures of airborne forces to model a “small and many” approach over a “large and few” approach, while incorporating a “swarming” concept. Utilizing

  20. Determination of the spatial structure of vegetation on the repository of the mine "Fryderyk" in Tarnowskie Góry, based on airborne laser scanning from the ISOK project and digital orthophotomaps

    NASA Astrophysics Data System (ADS)

    Szostak, Marta; Wężyk, Piotr; Pająk, Marek; Haryło, Paweł; Lisańczuk, Marek

    2015-06-01

    The purpose of this study was to determine the spatial structure of vegetation on the repository of the mine "Fryderyk" in Tarnowskie Góry. The tested area was located in the Upper Silesian Industrial Region (a large industrial region in Poland) and is a unique refuge habitat (Natura 2000 site PLH240008). The main aspect of this elaboration was to investigate the possible use of geotechniques and generally available geodata for mapping LULC changes and determining the spatial structure of vegetation. The presented study focuses on the analysis of the spatial structure of vegetation in the research area. This exploration was based on aerial images and orthophotomaps from 1947, 1998, 2003, 2009, and 2011, and on airborne laser scanning data (2011, ISOK project). Forest succession changes which occurred between 1947 and 2011 were analysed. Selected features of the vegetation overgrowing the spoil heap "Fryderyk" were determined. The results demonstrated a gradual succession of greenery on the spoil heap. In 1947, 84% of this area was covered by low vegetation. Tree expansion proceeded in the westerly and northwesterly directions. In 2011 this canopy layer covered almost 50% of the research area. Parameters such as height of vegetation, crown length, and cover density were calculated from the airborne laser scanning data. These analyses indicated significant diversity in the vertical and horizontal structures of the vegetation. The study presents some capacities to use airborne laser scanning for an impartial evaluation of the structure of vegetation.

  1. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  2. Neutron cameras for ITER

    SciTech Connect

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from {sup 16}N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with {sup 16}N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  3. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  4. Optical design of camera optics for mobile phones

    NASA Astrophysics Data System (ADS)

    Steinich, Thomas; Blahnik, Vladan

    2012-03-01

    At present, compact camera modules are included in many mobile electronic devices such as mobile phones, personal digital assistants or tablet computers. They have various uses, from snapshots of everyday situations to capturing barcodes for product information. This paper presents an overview of the key design challenges and some typical solutions. A lens design for a mobile phone camera is compared to a downscaled 35 mm format lens to demonstrate the main differences in optical design. Particular attention is given to scaling effects.

  5. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  6. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  7. Generalized phase-shifting color digital holography

    NASA Astrophysics Data System (ADS)

    Nomura, Takanori; Kawakami, Takaaki; Shinomura, Kazuma

    2016-06-01

    Two methods of applying generalized phase-shifting digital holography to color digital holography are proposed. One is wave-splitting generalized phase-shifting color digital holography, realized by using a color Bayer camera. The other is multiple-exposure generalized phase-shifting color digital holography, realized with wavelength-dependent phase-shifting devices. Experimental results for both methods are presented to confirm the proposed approaches.
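
    The abstract does not give the reconstruction formulas. As a hedged illustration, the sketch below implements the standard four-step phase-shifting special case (shifts of 0, π/2, π, 3π/2), which generalized phase shifting extends to arbitrary, possibly unknown, shift values.

```python
# Four-step phase-shifting reconstruction of a complex object wave from
# four interferograms, verified against a synthetic hologram.
import numpy as np

def four_step_reconstruct(I0, I1, I2, I3, ref_amp=1.0):
    """Recover the complex object wave O from four phase-shifted holograms.

    Model: I(delta) = |O|^2 + |R|^2 + 2|O||R| cos(arg(O) - arg(R) + delta),
    with a plane reference wave of real amplitude ref_amp.
    """
    return ((I0 - I2) + 1j * (I3 - I1)) / (4.0 * ref_amp)

# Synthetic check: build the four holograms from a known object wave.
rng = np.random.default_rng(1)
obj = (rng.uniform(0.1, 1.0, (32, 32))
       * np.exp(1j * rng.uniform(-np.pi, np.pi, (32, 32))))
R = 1.0
holos = [np.abs(obj)**2 + R**2
         + 2 * np.abs(obj) * R * np.cos(np.angle(obj) + d)
         for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = four_step_reconstruct(*holos)
```

Since (I0 - I2) + i(I3 - I1) = 4|O||R| e^{i arg(O)}, dividing by 4|R| returns the object wave exactly for noiseless data.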

  8. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude, with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator.
A low-volume array of such penetrator cameras could be deployed from an

  9. Digital field ion microscopy

    SciTech Connect

    Sijbrandij, S.J.; Russell, K.F.; Miller, M.K.; Thomson, R.C.

    1998-01-01

    Due to environmental concerns, there is a trend to avoid the use of chemicals needed to develop negatives and to process photographic paper, and to use digital technologies instead. Digital technology also offers the advantages that it is convenient, as it enables quick access to the end result, allows image storage and processing on computer, allows rapid hard copy output, and simplifies electronic publishing. Recently significant improvements have been made to the performance and cost of camera-sensors and printers. In this paper, field ion images recorded with two digital cameras of different resolution are compared to images recorded on standard 35 mm negative film. It should be noted that field ion images exhibit low light intensity and high contrast. Field ion images were recorded from a standard microchannel plate and a phosphor screen and had acceptance angles of approximately 60°. Digital recordings were made with a Digital Vision Technologies (DVT) MICAM VHR1000 camera with a resolution of 752 x 582 pixels, and a Kodak DCS 460 digital camera with a resolution of 3,060 x 2,036 pixels. Film based recordings were made with Kodak T-MAX film rated at 400 ASA. The resolving power of T-MAX film, as specified by Kodak, is between 50 and 125 lines per mm, which corresponds to between 1,778 x 1,181 and 4,445 x 2,953 pixels, i.e. similar to that from the DCS 460 camera. The intensities of the images were sufficient to be recorded with standard f/1.2 lenses with exposure times of less than 2 s. Many digital cameras were excluded from these experiments due to their lack of sensitivity or the inability to record a full frame image due to the fixed working distance defined by the vacuum system. The digital images were output on a Kodak Digital Science 8650 PS dye sublimation color printer (300 dpi). All field ion micrographs presented were obtained from a Ni-Al-Be specimen.
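
    The film-to-pixel equivalence quoted above is simple arithmetic: resolving power (lines/mm) times the frame dimension (mm). A nominal 36 x 24 mm full frame is assumed below; the abstract's exact figures imply a slightly smaller frame (about 35.6 x 23.6 mm), so the results here differ from the quoted values by a few percent.

```python
# Convert a film resolving power (lines per mm) into an equivalent
# pixel count across a 35 mm frame. Frame size is an assumption.
FRAME_W_MM, FRAME_H_MM = 36.0, 24.0

def film_pixel_equivalent(lines_per_mm):
    """Pixel dimensions equivalent to a given film resolving power."""
    return (round(FRAME_W_MM * lines_per_mm),
            round(FRAME_H_MM * lines_per_mm))

low = film_pixel_equivalent(50)    # vs. 1,778 x 1,181 quoted
high = film_pixel_equivalent(125)  # vs. 4,445 x 2,953 quoted
```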

  10. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low readout noise (<2 e-). Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  11. Video-Based Point Cloud Generation Using Multiple Action Cameras

    NASA Astrophysics Data System (ADS)

    Teo, T.

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data becomes an important trend. The objective of this study is to compare the image-mode and video-mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations while videos are taken from continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As the action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in calibrating the effect of lens distortion before image matching. Once the cameras have been calibrated, the authors use these action cameras to take video in an indoor environment. The videos are further converted into multiple frame images based on the frame rates. In order to overcome time synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time shift factor between cameras in time alignment. A structure-from-motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicated that the 3D points from 4K video are similar to those from 12MP images, but the data acquisition performance of 4K video is more efficient than that of 12MP digital images.
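
    The video-conversion and time-alignment step above can be sketched as follows: frames extracted at a fixed rate are placed on a common timeline using the per-camera shift measured with the timer app, and frames from two cameras are paired when their timestamps agree within a tolerance. Function names and the example numbers are assumptions, not from the paper.

```python
# Place extracted video frames on a common timeline and pair frames
# across two cameras using a known time-shift factor.

def frame_timestamps(n_frames, fps, time_shift_s):
    """Timestamps (s) of extracted frames on the common timeline."""
    return [i / fps + time_shift_s for i in range(n_frames)]

def matched_pairs(ts_a, ts_b, tol_s):
    """Pair frames from two cameras whose timestamps agree within tol_s."""
    pairs = []
    j = 0
    for i, ta in enumerate(ts_a):
        while j < len(ts_b) and ts_b[j] < ta - tol_s:
            j += 1
        if j < len(ts_b) and abs(ts_b[j] - ta) <= tol_s:
            pairs.append((i, j))
    return pairs

# Camera B started recording 0.2 s after camera A; both run at 30 fps.
ts_a = frame_timestamps(300, 30.0, 0.0)
ts_b = frame_timestamps(300, 30.0, 0.2)
pairs = matched_pairs(ts_a, ts_b, tol_s=1.0 / 60.0)
```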

  12. Light microscopy digital imaging.

    PubMed

    Joubert, James; Sharma, Deepak

    2011-10-01

    This unit presents an overview of digital imaging hardware used in light microscopy. CMOS, CCD, and EMCCDs are the primary sensors used. The strengths and weaknesses of each define the primary applications for these sensors. Sensor architecture and formats are also reviewed. Color camera design strategies and sensor window cleaning are also described in the unit.

  13. The Digital Divide

    ERIC Educational Resources Information Center

    Hudson, Hannah Trierweiler

    2011-01-01

    Megan is a 14-year-old from Nebraska who just started ninth grade. She has her own digital camera, cell phone, Nintendo DS, and laptop, and one or more of these devices is usually by her side. Compared to the interactions and exploration she's engaged in at home, Megan finds the technology in her classroom falls a little flat. Most of the…

  14. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh frequency of the projector. To ensure that the kick and movement data were real, a background test was conducted with no baby movement (to correct for breathing and body motion).

  15. Scientific Objectives of Small Carry-on Impactor (SCI) and Deployable Camera 3 Digital (DCAM3-D): Observation of an Ejecta Curtain and a Crater Formed on the Surface of Ryugu by an Artificial High-Velocity Impact

    NASA Astrophysics Data System (ADS)

    Arakawa, M.; Wada, K.; Saiki, T.; Kadono, T.; Takagi, Y.; Shirai, K.; Okamoto, C.; Yano, H.; Hayakawa, M.; Nakazawa, S.; Hirata, N.; Kobayashi, M.; Michel, P.; Jutzi, M.; Imamura, H.; Ogawa, K.; Sakatani, N.; Iijima, Y.; Honda, R.; Ishibashi, K.; Hayakawa, H.; Sawada, H.

    2016-10-01

    The Small Carry-on Impactor (SCI) equipped on Hayabusa2 was developed to produce an artificial impact crater on the primitive Near-Earth Asteroid (NEA) 162173 Ryugu in order to explore asteroid subsurface material unaffected by space weathering and thermal alteration by solar radiation. The fresh surface exposed by the impactor and/or the ejecta deposit excavated from the crater will be observed by remote sensing instruments, and a subsurface fresh sample of the asteroid will be collected there. The SCI impact experiment will be observed by the Deployable CAMera 3-D (DCAM3-D) at a distance of ~1 km from the impact point, and the time evolution of the ejecta curtain will be observed by this camera to confirm the impact point on the asteroid surface. From the observation of the ejecta curtain by DCAM3-D and of the crater morphology by the onboard cameras, the subsurface structure and the physical properties of the constituent materials will be derived from crater scaling laws. Moreover, the SCI experiment on Ryugu gives us a precious opportunity to clarify the effects of microgravity on the cratering process and to validate numerical simulations and models of the cratering process.
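
    The abstract invokes crater scaling laws without stating them. As a hedged illustration, the sketch below uses the standard gravity-regime pi-group form, pi_R = K1 * pi_2^(-beta), with pi_R = R (rho/m)^(1/3) and pi_2 = g a / U^2. The constants K1 and beta are illustrative values of the order used for granular targets, and the impactor and asteroid numbers are rough stand-ins, not mission values.

```python
# Gravity-regime pi-group crater scaling: crater radius R from impactor
# mass m (kg) and radius a (m), target density rho (kg/m^3), surface
# gravity g (m/s^2), and impact speed U (m/s). Constants are illustrative.

def gravity_regime_radius(m, rho, g, a, U, K1=1.0, beta=0.17):
    """Crater radius R (m) from pi_R = K1 * pi_2**(-beta)."""
    pi2 = g * a / U**2
    return K1 * pi2**(-beta) * (m / rho)**(1.0 / 3.0)

# ~2 kg impactor, ~2 km/s, sand-like target, asteroid-scale vs. Earth gravity.
R_small_g = gravity_regime_radius(m=2.0, rho=1500.0, g=1.5e-4, a=0.05, U=2000.0)
R_earth_g = gravity_regime_radius(m=2.0, rho=1500.0, g=9.81, a=0.05, U=2000.0)
```

The comparison illustrates the microgravity point made in the abstract: in the gravity regime, the same impact excavates a much larger crater under an asteroid's weak gravity than it would on Earth.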

  16. Identification and extraction of the seaward edge of terrestrial vegetation using digital aerial photography

    USGS Publications Warehouse

    Harris, Melanie; Brock, John C.; Nayegandhi, A.; Duffy, M.; Wright, C.W.

    2006-01-01

    This report is created as part of the Aerial Data Collection and Creation of Products for Park Vital Signs Monitoring within the Northeast Region Coastal and Barrier Network project, which is a joint project between the National Park Service Inventory and Monitoring Program (NPS-IM), the National Aeronautics and Space Administration (NASA) Observational Sciences Branch, and the U.S. Geological Survey (USGS) Center for Coastal and Watershed Studies (CCWS). This report is one of a series that discusses methods for extracting topographic features from aerial survey data. It details step-by-step methods used to extract a spatially referenced digital line from aerial photography that represents the seaward edge of terrestrial vegetation along the coast of Assateague Island National Seashore (ASIS). One component of the NPS-IM/USGS/NASA project includes the collection of NASA aerial surveys over various NPS barrier islands and coastal parks throughout the National Park Service's Northeast Region. These aerial surveys consist of collecting optical remote sensing data from a variety of sensors, including the NASA Airborne Topographic Mapper (ATM), the NASA Experimental Advanced Airborne Research Lidar (EAARL), and down-looking digital mapping cameras.

  17. The VISTA IR camera

    NASA Astrophysics Data System (ADS)

    Dalton, Gavin B.; Caldwell, Martin; Ward, Kim; Whalley, Martin S.; Burke, Kevin; Lucas, John M.; Richards, Tony; Ferlet, Marc; Edeson, Ruben L.; Tye, Daniel; Shaughnessy, Bryan M.; Strachan, Mel; Atad-Ettedgui, Eli; Leclerc, Melanie R.; Gallie, Angus; Bezawada, Nagaraja N.; Clark, Paul; Bissonauth, Nirmal; Luke, Peter; Dipper, Nigel A.; Berry, Paul; Sutherland, Will; Emerson, Jim

    2004-09-01

    The VISTA IR Camera has now completed its detailed design phase and is on schedule for delivery to ESO's Cerro Paranal Observatory in 2006. The camera consists of 16 Raytheon VIRGO 2048x2048 HgCdTe arrays in a sparse focal plane sampling a 1.65 degree field of view. A 1.4m diameter filter wheel provides slots for 7 distinct science filters, each comprising 16 individual filter panes. The camera also provides autoguiding and curvature sensing information for the VISTA telescope, and relies on tight tolerancing to meet the demanding requirements of the f/1 telescope design. The VISTA IR camera is unusual in that it contains no cold pupil-stop, but rather relies on a series of nested cold baffles to constrain the light reaching the focal plane to the science beam. In this paper we present a complete overview of the status of the final IR Camera design, its interaction with the VISTA telescope, and a summary of the predicted performance of the system.

  18. THE DARK ENERGY CAMERA

    SciTech Connect

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J.; Honscheid, K.; Abbott, T. M. C.; Bonati, M.; Antonik, M.; Brooks, D.; Ballester, O.; Cardiel-Sas, L.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Boprie, D.; Campa, J.; Castander, F. J.; Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  19. The Dark Energy Camera

    SciTech Connect

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  20. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  1. The Dark Energy Camera

    DOE PAGES

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  2. Neutron counting with cameras

    SciTech Connect

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transitions smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)

  3. Digital Photography and Its Impact on Instruction.

    ERIC Educational Resources Information Center

    Lantz, Chris

    Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…

  4. Telemedicine screening of diabetic retinopathy using a hand-held fundus camera.

    PubMed

    Yogesan, K; Constable, I J; Barry, C J; Eikelboom, R H; McAllister, I L; Tay-Kearney, M L

    2000-01-01

    The objective was to evaluate digital images of the retina from a handheld fundus camera (Nidek NM-100) for suitability in telemedicine screening of diabetic retinopathy. A handheld fundus camera (Nidek) and a standard fundus camera (Zeiss) were used to photograph 49 eyes from 25 consecutive patients attending our diabetic clinic. One patient had cataracts, making it impossible to obtain a quality retinal image from one eye. The Nidek images were digitized, compressed, and stored in a Fujix DF-10M digitizer supplied with the camera. The digital images and the photographs were presented separately in a random order to three ophthalmologists. The quality of the images was ranked as good, acceptable, or unacceptable for diabetic retinopathy diagnosis. The images were also evaluated for the presence of microaneurysms, blot hemorrhages, exudates, fibrous tissue, previous photocoagulation, and new vessel formation. Kappa values were computed for agreement between the photographs and digital images. Overall agreement between the photographs and digital images was poor (kappa < 0.30). On average, only 24% of the digital images were graded as being of good quality and 56% as having an acceptable quality. However, 93% of the photographs were graded as good-quality images for diagnosis. The results indicate that the digital images from the handheld fundus camera may not be suitable for diagnosis of diabetic retinopathy. The images shown on the liquid crystal display (LCD) screen of the camera were of good quality. However, the images produced by the digitizer (Fujix DF-10M) attached to the camera were not as good as the images shown on the LCD screen. A better digitizing system may produce better quality images from the Nidek camera.
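    The kappa statistic this abstract relies on measures chance-corrected agreement between two graders. A minimal sketch of Cohen's kappa for two label sequences (the grades below are illustrative, not the study's data):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two sequences of labels."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        # expected agreement if both raters labelled independently at their own rates
        expected = sum(freq_a[lbl] * freq_b[lbl]
                       for lbl in set(freq_a) | set(freq_b)) / n**2
        return (observed - expected) / (1 - expected)

    a = ["good", "good", "acceptable", "unacceptable", "good"]
    b = ["good", "acceptable", "acceptable", "unacceptable", "good"]
    print(round(cohens_kappa(a, b), 3))
    ```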

  5. International Symposium on Airborne Geophysics

    NASA Astrophysics Data System (ADS)

    Mogi, Toru; Ito, Hisatoshi; Kaieda, Hideshi; Kusunoki, Kenichiro; Saltus, Richard W.; Fitterman, David V.; Okuma, Shigeo; Nakatsuka, Tadashi

    2006-05-01

    Airborne geophysics can be defined as the measurement of Earth properties from sensors in the sky. The airborne measurement platform is usually a traditional fixed-wing airplane or helicopter, but could also include lighter-than-air craft, unmanned drones, or other specialty craft. The earliest history of airborne geophysics includes kite and hot-air balloon experiments. However, modern airborne geophysics dates from the mid-1940s when military submarine-hunting magnetometers were first used to map variations in the Earth's magnetic field. The current gamut of airborne geophysical techniques spans a broad range, including potential fields (both gravity and magnetics), electromagnetics (EM), radiometrics, spectral imaging, and thermal imaging.

  6. Airborne Remote Sensing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA imaging technology has provided the basis for a commercial agricultural reconnaissance service. AG-RECON furnishes information from airborne sensors, aerial photographs and satellite and ground databases to farmers, foresters, geologists, etc. This service produces color "maps" of Earth conditions, which enable clients to detect crop color changes or temperature changes that may indicate fire damage or pest stress problems.

  7. Recognizing Airborne Hazards.

    ERIC Educational Resources Information Center

    Schneider, Christian M.

    1990-01-01

    The heating, ventilating, and air conditioning (HVAC) systems in older buildings often do not adequately handle airborne contaminants. Outlines a three-stage Indoor Air Quality (IAQ) assessment and describes a case in point at a Pittsburgh, Pennsylvania, school. (MLF)

  8. Airborne asbestos in buildings.

    PubMed

    Lee, R J; Van Orden, D R

    2008-03-01

    The concentration of airborne asbestos in buildings nationwide is reported in this study. A total of 3978 indoor samples from 752 buildings, representing nearly 32 man-years of sampling, have been analyzed by transmission electron microscopy. The buildings that were surveyed were the subject of litigation related to suits alleging that the general building occupants were exposed to a potential health hazard as a result of the presence of asbestos-containing materials (ACM). The average concentration of all airborne asbestos structures was 0.01 structures/ml (s/ml) and the average concentration of airborne asbestos ≥ 5 μm long was 0.00012 fibers/ml (f/ml). For all samples, 99.9% of the samples were <0.01 f/ml for fibers longer than 5 μm; no building averaged above 0.004 f/ml for fibers longer than 5 μm. No asbestos was detected in 27% of the buildings, and in 90% of the buildings no asbestos was detected that would have been seen optically (≥ 5 μm long and ≥ 0.25 μm wide). Background outdoor concentrations have been reported at 0.0003 f/ml for fibers ≥ 5 μm. These results indicate that in-place ACM does not result in elevated airborne asbestos in building atmospheres approaching regulatory levels and that it does not result in a significantly increased risk to building occupants.

  9. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, making a camera with such a short resolution time possible.

  10. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  11. A Coordinated Ice-based and Airborne Snow and Ice Thickness Measurement Campaign on Arctic Sea Ice

    NASA Astrophysics Data System (ADS)

    Richter-Menge, J.; Farrell, S.; Elder, B. C.; Gardner, J. M.; Brozena, J. M.

    2011-12-01

    A rare opportunity presented itself in March 2011 when the Naval Research Laboratory (NRL) and NASA IceBridge teamed with scientists from the U.S. Army Corps of Engineers Cold Regions Research and Engineering Laboratory (CRREL) to coordinate a multi-scale approach to mapping snow depth and sea ice thickness distribution in the Arctic. Ground-truth information for calibration/validation of airborne and CryoSat-2 satellite data was collected near a manned camp deployed in support of the US Navy's Ice Expedition 2011 (ICEX 2011). The ice camp was established at a location approximately 230 km north of Prudhoe Bay, Alaska, at the edge of the perennial ice zone. The suite of measurements was strategically organized around a 9-km-long survey line that covered a wide range of ice types, including refrozen leads, deformed and undeformed first year ice, and multiyear ice. A highly concentrated set of in situ measurements of snow depth and ice thickness was taken along the survey line. Once the survey line was in place, NASA IceBridge flew a dedicated mission along the survey line, collecting data with an instrument suite that included the Airborne Topographic Mapper (ATM), a high precision, airborne scanning laser altimeter; the Digital Mapping System (DMS), a nadir-viewing digital camera; and the University of Kansas ultra-wideband Frequency Modulated Continuous Wave (FMCW) snow radar. NRL also flew a dedicated mission over the survey line with complementary airborne radar, laser and photogrammetric sensors (see Brozena et al., this session). These measurements were further leveraged by a series of CryoSat-2 underflights made in the region by the instrumented NRL and NASA planes, as well as US Navy submarine underpasses of the 9-km-long survey line to collect ice draft measurements. This comprehensive suite of data provides the full spectrum of sampling resolutions from satellite, to airborne, to ground-based, to submarine and will allow for a careful determination of

  12. Photoreactivation in Airborne Mycobacterium parafortuitum

    PubMed Central

    Peccia, Jordan; Hernandez, Mark

    2001-01-01

    Photoreactivation was observed in airborne Mycobacterium parafortuitum exposed concurrently to UV radiation (254 nm) and visible light. Photoreactivation rates of airborne cells increased with increasing relative humidity (RH) and decreased with increasing UV dose. Under a constant UV dose with visible light absent, the UV inactivation rate of airborne M. parafortuitum cells decreased by a factor of 4 as RH increased from 40 to 95%; however, under identical conditions with visible light present, the UV inactivation rate of airborne cells decreased only by a factor of 2. When irradiated in the absence of visible light, cellular cyclobutane thymine dimer content of UV-irradiated airborne M. parafortuitum and Serratia marcescens increased in response to RH increases. Results suggest that, unlike in waterborne bacteria, cyclobutane thymine dimers are not the most significant form of UV-induced DNA damage incurred by airborne bacteria and that the distribution of DNA photoproducts incorporated into UV-irradiated airborne cells is a function of RH. PMID:11526027

  13. Artificial human vision camera

    NASA Astrophysics Data System (ADS)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to draw inspiration from human vision bio-mechanics to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the bio-mechanical discrepancies between human vision and classic cameras, and the retinal processing stage that takes place in the eye, before the optic nerve. The second part describes our implementation of these principles in a 3-camera optical, mechanical, and software model of the human eyes and an associated bio-inspired attention model.

  14. Laser Range Camera Modeling

    SciTech Connect

    Storjohann, K.

    1990-01-01

    This paper describes an imaging model that was derived for use with a laser range camera (LRC) developed by the Advanced Intelligent Machines Division of Odetics. However, this model could be applied to any comparable imaging system. Both the derivation of the model and the determination of the LRC's intrinsic parameters are explained. For the purpose of evaluating the LRC's extrinsic parameters, i.e., its external orientation, a transformation of the LRC's imaging model into a standard camera's (SC) pinhole model is derived. By virtue of this transformation, the evaluation of the LRC's external orientation can be found by applying any SC calibration technique.

  15. Lights, Camera, Learning!

    ERIC Educational Resources Information Center

    Bull, Glen; Bell, Lynn

    2009-01-01

    The shift from analog to digital video transformed the system from a unidirectional analog broadcast to a two-way conversation, resulting in the birth of participatory media. Digital video offers new opportunities for teaching science, social studies, mathematics, and English language arts. The professional education associations for each content…

  16. Small, low power analog-to-digital converter

    NASA Technical Reports Server (NTRS)

    Dunn, R. D.; Fullerton, D. H.

    1968-01-01

    A small, low-power, high-speed, 8-bit analog-to-digital converter using silicon chip integrated circuits is suitable for use in airborne test data systems. The successive approximation method of analog-to-digital conversion is used to generate the digital output.
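    The successive approximation method mentioned above resolves one bit per comparison, most-significant bit first, by comparing the input against an internal DAC. A minimal behavioral sketch of an 8-bit SAR conversion (idealized comparator and DAC; names are illustrative):

    ```python
    def sar_adc(vin: float, vref: float, bits: int = 8) -> int:
        """Successive-approximation conversion: one comparison per bit, MSB first."""
        code = 0
        for bit in range(bits - 1, -1, -1):
            trial = code | (1 << bit)            # tentatively set this bit
            vdac = vref * trial / (1 << bits)    # ideal DAC output for the trial code
            if vin >= vdac:                      # comparator keeps the bit if input is higher
                code = trial
        return code

    print(sar_adc(2.5, 5.0))  # mid-scale input -> prints 128
    ```

    Each of the 8 iterations halves the remaining search interval, which is why SAR converters trade one comparator cycle per bit for very low power and small area.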

  17. Electronic imaging system incorporating a hand-held fundus camera for canine ophthalmology.

    PubMed

    Hoang, H D; Brant, L M; Jaksetic, M D; Lake, S G; Stuart, B P

    2001-11-01

    An electronic imaging system incorporating a hand-held fundus camera was used to collect images of the canine ocular fundus. The electronic imaging system comprised a hand-held fundus camera, an IBM personal computer (PC 350), Microsoft Windows NT 4.0, Adobe Photoshop, and a color printer (Tektronix Phaser 550) and was used to store, edit, and print the images captured by the fundus camera. Hand-held fundus cameras are essential for use in canine ophthalmology. The Nidek NM-100 hand-held fundus camera digitizes images, enabling their direct transfer into reports and their storage on writeable CDs.

  18. Pantir - a Dual Camera Setup for Precise Georeferencing and Mosaicing of Thermal Aerial Images

    NASA Astrophysics Data System (ADS)

    Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.

    2015-03-01

    Research and monitoring in fields like hydrology and agriculture are applications of airborne thermal infrared (TIR) cameras, which suffer from low spatial resolution and low quality lenses. Common ground control points (GCPs), lacking thermal activity and being relatively small in size, cannot be used in TIR images. Precise georeferencing and mosaicing however is necessary for data analysis. Adding a high resolution visible light camera (VIS) with a high quality lens very close to the TIR camera, in the same stabilized rig, allows us to do accurate geoprocessing with standard GCPs after fusing both images (VIS+TIR) using standard image registration methods.

  19. ASTRI SST-2M camera electronics

    NASA Astrophysics Data System (ADS)

    Sottile, G.; Catalano, O.; La Rosa, G.; Capalbi, M.; Gargano, C.; Giarrusso, S.; Impiombato, D.; Russo, F.; Sangiorgi, P.; Segreto, A.; Bonanno, G.; Garozzo, S.; Marano, D.; Romeo, G.; Scuderi, S.; Stringhetti, L.; Canestrari, R.; Gimenes, R.

    2016-07-01

    ASTRI SST-2M is an Imaging Atmospheric Cherenkov Telescope (IACT) developed by the Italian National Institute of Astrophysics, INAF. It is the prototype of the ASTRI telescopes proposed to be installed at the southern site of the Cherenkov Telescope Array, CTA. The optical system of the ASTRI telescopes is based on a dual mirror configuration, an innovative solution for IACTs, and the focal plane of the camera is composed of silicon photo-multipliers (SiPM), a recently developed technology for light detection, that exhibit very fast response and an excellent single photoelectron resolution. The ASTRI camera electronics is specifically designed to directly interface the SiPM sensors, detecting the fast pulses produced by the Cherenkov flashes, managing the trigger generation, the digital conversion of the signals and the transmission of the data to an external camera server connected through a LAN. In this contribution we present the general architecture of the camera electronics developed for the ASTRI SST-2M prototype, with special emphasis to some innovative solutions.

  20. Autofocus method for scanning remote sensing cameras.

    PubMed

    Lv, Hengyi; Han, Chengshan; Xue, Xucheng; Hu, Changhong; Yao, Cheng

    2015-07-10

    Autofocus methods are conventionally based on capturing the same scene from a series of positions of the focal plane. As a result, it has been difficult to apply this technique to scanning remote sensing cameras, where the scenes change continuously. In order to realize autofocus in scanning remote sensing cameras, a novel autofocus method is investigated in this paper. Instead of introducing additional mechanisms or optics, the overlapped pixels of the adjacent CCD sensors on the focal plane are employed. Two images, corresponding to the same scene on the ground, can be captured at different times. Further, one step of focusing is done during the time interval, so that the two images can be obtained at different focal plane positions. Subsequently, the direction of the next step of focusing is calculated based on the two images. The analysis shows that the investigated method operates without restricting the time consumption of the algorithm and carries general focus measures and algorithms over from digital still cameras to scanning remote sensing cameras. The experimental results show that the proposed method is applicable to the entire focus measure family; the error ratio is, on average, no more than 0.2% and drops to 0% with reliability improvement, which is lower than that of prevalent approaches (12%). The proposed method is demonstrated to be effective and has potential in other scanning imaging applications.
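    The direction decision described above can be illustrated with any standard focus measure applied to the two overlap images: whichever focal position yields the sharper image indicates the direction to step. A minimal sketch using Laplacian energy as the measure (one common choice from the focus measure family; the tiny test images are illustrative, not the paper's method or data):

    ```python
    def focus_measure(img):
        """Sum of squared discrete Laplacian responses: larger means sharper."""
        h, w = len(img), len(img[0])
        total = 0.0
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                       - 4 * img[y][x])
                total += lap * lap
        return total

    def focus_step_direction(img_before, img_after):
        """+1: keep stepping the same way; -1: reverse (the step lost sharpness)."""
        return 1 if focus_measure(img_after) > focus_measure(img_before) else -1

    sharp  = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
    blurry = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
    print(focus_step_direction(blurry, sharp))  # prints 1
    ```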

  1. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations of the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results of the structured-light camera system show that the camera system requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it will require a complete re-design of the light source component. The ToF camera system, in contrast, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by putting the LEDs into an array configuration, and the LEDs can be modulated comfortably with any waveform and frequencies required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
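    A continuous-wave ToF camera of the kind described recovers distance from the phase shift between the emitted and received modulated light. A minimal sketch of the textbook four-sample phase estimate (generic CW-ToF math, not the specific camera in the paper; values are illustrative):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(q0, q90, q180, q270, f_mod):
        """Distance from four correlation samples of a CW-modulated signal."""
        phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
        return C * phase / (4 * math.pi * f_mod)  # round trip halves the path

    # a target at 1.5 m with 20 MHz modulation produces this phase shift:
    f = 20e6
    true_phase = 4 * math.pi * f * 1.5 / C
    samples = (math.cos(true_phase), math.sin(true_phase),
               -math.cos(true_phase), -math.sin(true_phase))
    print(round(tof_distance(*samples, f), 3))  # prints 1.5
    ```

    The unambiguous range is c/(2·f_mod), about 7.5 m at 20 MHz, which is why modulation frequency trades range against depth resolution.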

  2. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. © 1984.

  3. Make a Pinhole Camera

    ERIC Educational Resources Information Center

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  4. Snapshot polarimeter fundus camera.

    PubMed

    DeHoog, Edward; Luo, Haitao; Oka, Kazuhiko; Dereniak, Eustace; Schwiegerling, James

    2009-03-20

    A snapshot imaging polarimeter utilizing Savart plates is integrated into a fundus camera for retinal imaging. Acquired retinal images can be processed to reconstruct Stokes vector images, giving insight into the polarization properties of the retina. Results for images from a normal healthy retina and retinas with pathology are examined and compared.
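    The Stokes-vector reconstruction mentioned above can be illustrated with the textbook recipe from four polarization-filtered intensity measurements (a generic method, not the paper's Savart-plate demodulation; ideal analyzers are assumed):

    ```python
    def stokes_from_intensities(i_h, i_v, i_45, i_rcp):
        """Stokes parameters from horizontal, vertical, 45-degree linear and
        right-circular analyzer intensities (ideal analyzers assumed)."""
        s0 = i_h + i_v                 # total intensity
        s1 = i_h - i_v                 # horizontal vs vertical preference
        s2 = 2 * i_45 - s0             # +45 vs -45 preference
        s3 = 2 * i_rcp - s0            # right vs left circular preference
        return (s0, s1, s2, s3)

    # fully horizontally polarized light of unit intensity:
    print(stokes_from_intensities(1.0, 0.0, 0.5, 0.5))  # prints (1.0, 1.0, 0.0, 0.0)
    ```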

  5. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  6. Spas color camera

    NASA Technical Reports Server (NTRS)

    Toffales, C.

    1983-01-01

    The procedures to be followed in assessing the performance of the MOS color camera are defined. Aspects considered include: horizontal and vertical resolution; value of the video signal; gray scale rendition; environmental (vibration and temperature) tests; signal to noise ratios; and white balance correction.

  7. Advanced Virgo phase cameras

    NASA Astrophysics Data System (ADS)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to prove their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms rises from 20 to 700 kW. This increase is expected to introduce higher-order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo is equipped with a thermal compensation system. Phase cameras, monitoring the real-time status of the beam, constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with a pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the in- and output of the interferometer arms and the third one is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.

  8. Communities, Cameras, and Conservation

    ERIC Educational Resources Information Center

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  9. The LSST Camera Overview

    SciTech Connect

    Gilmore, Kirk; Kahn, Steven A.; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe; /SLAC

    2007-01-10

    The LSST camera is a wide-field optical (0.35–1 μm) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope, where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately −100 °C to achieve desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.

  10. Ultraminiature television camera

    NASA Technical Reports Server (NTRS)

    Deterville, R. J.; Drago, N.

    1967-01-01

    Ultraminiature television camera with a total volume of 20.25 cubic inches, requires 28 vdc power, operates on UHF and accommodates standard 8-mm optics. It uses microelectronic assembly packaging techniques and contains a magnetically deflected and electrostatically focused vidicon, automatic gain control circuit, power supply, and transmitter.

  11. Low-Cost Optical Camera System for Disaster Monitoring

    NASA Astrophysics Data System (ADS)

    Kurz, F.; Meynberg, O.; Rosenbaum, D.; Türmer, S.; Reinartz, P.; Schroeder, M.

    2012-07-01

    Real-time monitoring of natural disasters, mass events, and large accidents with airborne optical sensors is an ongoing topic in research and development. Airborne monitoring is used as a complementary data source with the advantages of flexible data acquisition and higher spatial resolution compared to optical satellite data. In cases of disasters or mass events, optical high-resolution image data received directly after acquisition are highly welcomed by security-related organizations like police and rescue forces. Low-cost optical camera systems are suitable for real-time applications, as the accuracy requirements can be lowered in return for faster processing times. In this paper, the performance of low-cost camera systems for real-time mapping applications is evaluated by example, based on already existing sensor systems operated at the German Aerospace Center (DLR). Besides geometric and radiometric performance, the focus lies on the real-time processing chain, which includes image processors, thematic processors for automatic traffic extraction and automatic person tracking, the data downlink to the ground station, and further processing and distribution on the ground. Finally, a concept for a national airborne rapid mapping service based on the low-cost hardware is proposed.

  12. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometric model by means of bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for the Olympus E10 camera system with its aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera at three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. The results show the robustness of the SVM approach in correcting image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
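    The core idea — learning a distortion curve with an RBF-kernel SVM instead of a fixed polynomial model — can be sketched with scikit-learn's SVR. The distortion coefficients, sample counts, and noise level below are hypothetical illustration values, not data from the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic radial distortion dr = k1*r^3 + k2*r^5 plus measurement noise,
# as might be observed at check points during on-the-job calibration.
rng = np.random.default_rng(42)
r = rng.uniform(0.0, 1.0, 200)                  # normalised radial distance
k1, k2 = 0.08, -0.03                            # hypothetical coefficients
dr = k1 * r**3 + k2 * r**5 + rng.normal(0, 1e-3, r.size)

# An RBF-kernel SVR learns the distortion curve without assuming its form
model = SVR(kernel="rbf", C=10.0, epsilon=1e-3, gamma="scale")
model.fit(r.reshape(-1, 1), dr)

# Image coordinates are then corrected by subtracting the predicted distortion
predicted = model.predict(np.array([[0.5]]))[0]
```

The appeal of the SVM here is that no polynomial order has to be chosen in advance; the epsilon-insensitive loss also gives some robustness to noisy check-point measurements.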

  13. Mars Cameras Make Panoramic Photography a Snap

    NASA Technical Reports Server (NTRS)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  14. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics
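    One of the hardware artifacts such surveys cover is photo-response non-uniformity (PRNU): each sensor's fixed multiplicative noise pattern survives in image noise residuals and can be matched by correlation. The toy simulation below illustrates only that idea; the Gaussian denoiser, noise levels, and image counts are simplifications (practical detectors use wavelet denoising and likelihood-based statistics), not methods taken from this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Image minus a denoised version; keeps high-frequency sensor noise."""
    return img - gaussian_filter(img, sigma)

def prnu_fingerprint(images):
    """Average the residuals of several images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
k_cam1 = 0.05 * rng.standard_normal((64, 64))   # fixed PRNU of camera 1
k_cam2 = 0.05 * rng.standard_normal((64, 64))   # fixed PRNU of camera 2

def shoot(k):
    """Simulate a smooth scene imaged by a sensor with PRNU pattern k."""
    scene = gaussian_filter(rng.uniform(size=(64, 64)), 3)
    return scene * (1.0 + k)

fp1 = prnu_fingerprint([shoot(k_cam1) for _ in range(8)])
fp2 = prnu_fingerprint([shoot(k_cam2) for _ in range(8)])

# An image of unknown origin correlates best with its own camera's fingerprint
query = noise_residual(shoot(k_cam1))
attributed_to_cam1 = ncc(query, fp1) > ncc(query, fp2)
```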

  15. The NRL 2011 Airborne Sea-Ice Thickness Campaign

    NASA Astrophysics Data System (ADS)

    Brozena, J. M.; Gardner, J. M.; Liang, R.; Ball, D.; Richter-Menge, J.

    2011-12-01

    In March of 2011, the US Naval Research Laboratory (NRL) performed a study focused on the estimation of sea-ice thickness from airborne radar, laser and photogrammetric sensors. The study was funded by ONR to take advantage of the Navy's ICEX2011 ice-camp/submarine exercise, and to serve as a lead-in year for NRL's five-year basic research program on the measurement and modeling of sea ice scheduled to take place from 2012-2017. Researchers from the Army Cold Regions Research and Engineering Laboratory (CRREL) and NRL worked with the Navy Arctic Submarine Lab (ASL) to emplace a 9 km-long ground-truth line near the ice-camp (see Richter-Menge et al., this session) along which ice and snow thickness were directly measured. Additionally, US Navy submarines collected ice draft measurements under the ground-truth line. Repeat passes directly over the ground-truth line were flown, and a grid surrounding the line was also flown to collect altimeter, LiDAR and photogrammetry data. Five CRYOSAT-2 satellite tracks were underflown as well, coincident with satellite passage. Estimates of sea-ice thickness are calculated assuming local hydrostatic balance, and require the densities of water, ice and snow, the snow depth, and the freeboard (defined as the elevation of sea ice, plus accumulated snow, above local sea level). Snow thickness is estimated from the difference between the LiDAR and radar altimeter profiles, the latter of which is assumed to penetrate any snow cover. The concepts we used to estimate ice thickness are similar to those employed in NASA IceBridge sea-ice thickness estimation. Airborne sensors used for our experiment were a Riegl Q-560 scanning topographic LiDAR, a pulse-limited (2 ns), 10 GHz radar altimeter and an Applanix DSS-439 digital photogrammetric camera (for lead identification). Flights were conducted on a Twin Otter aircraft from Pt. Barrow, AK, and averaged ~5 hours in duration.
It is challenging to directly compare results from the swath LiDAR with the
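    The hydrostatic-balance retrieval described in the abstract reduces to a single formula once freeboard and snow depth are known: with water, ice, and snow densities ρw, ρi, ρs, total freeboard F and snow depth hs, ice thickness is H = (ρw(F − hs) + ρs·hs) / (ρw − ρi). The densities below are typical textbook values, not those used in the campaign.

```python
def ice_thickness(freeboard_total, snow_depth,
                  rho_w=1024.0, rho_i=915.0, rho_s=320.0):
    """Sea-ice thickness (m) assuming local hydrostatic equilibrium.

    freeboard_total: elevation of snow surface above sea level (LiDAR)
    snow_depth:      LiDAR-minus-radar-altimeter difference
    Balance: rho_w * draft = rho_i * H + rho_s * snow_depth,
    with H = ice freeboard + draft and ice freeboard = F - snow_depth.
    """
    ice_freeboard = freeboard_total - snow_depth
    return (rho_w * ice_freeboard + rho_s * snow_depth) / (rho_w - rho_i)

# Example: 0.5 m total freeboard carrying 0.2 m of snow -> ~3.4 m of ice
H = ice_thickness(0.5, 0.2)
```

The denominator (ρw − ρi) is small, which is why thickness estimates of this kind are very sensitive to freeboard and density errors.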

  16. Filter algorithm for airborne LIDAR data

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ma, Hongchao; Wu, Jianwei; Tian, Liqiao; Qiu, Feng

    2007-11-01

    Airborne laser scanning data have become an accepted data source for the highly automated acquisition of digital surface models (DSM) as well as for the generation of digital terrain models (DTM). To generate a high-quality DTM from LIDAR data, 3D off-terrain points have to be separated from terrain points. Even though most LIDAR systems can measure "last-return" data points, these "last-return" points often record ground clutter such as shrubbery, cars, buildings, and the canopy of dense foliage. Consequently, raw LIDAR points must be post-processed to remove these undesirable returns. The degree to which this post-processing succeeds is critical in determining whether LIDAR is cost-effective for large-scale mapping applications. Various techniques have been proposed to extract the ground surface from airborne LIDAR data. The basic problem is the separation of terrain points from off-terrain points, both of which are recorded by the LIDAR sensor. In this paper a new method combining morphological filtering and TIN densification is proposed to separate 3D off-terrain points.
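    The morphological-filtering half of such a separation can be sketched on a gridded DSM: grey-scale opening with progressively larger windows removes objects smaller than the window, and cells that drop by more than a threshold are flagged as off-terrain. The window sizes, thresholds, and synthetic scene below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(dsm, windows=(3, 5, 9),
                                     thresholds=(0.5, 1.0, 2.0)):
    """Return a boolean ground mask for a gridded DSM.

    At each stage, grey opening flattens features smaller than the window;
    cells whose elevation exceeds the opened surface by more than the
    stage threshold are marked as off-terrain (buildings, vegetation).
    """
    surface = dsm.astype(float).copy()
    nonground = np.zeros(dsm.shape, dtype=bool)
    for w, t in zip(windows, thresholds):
        opened = grey_opening(surface, size=(w, w))
        nonground |= (surface - opened) > t
        surface = opened
    return ~nonground

# Synthetic scene: a gentle ramp (terrain) with a 5 m "building" on top
x = np.linspace(0.0, 1.0, 50)
dsm = np.tile(x, (50, 1))          # terrain rises 1 m across the grid
dsm[20:25, 20:25] += 5.0           # 5x5-cell off-terrain object
ground = progressive_morphological_filter(dsm)
```

Growing thresholds with window size is what lets the filter remove large buildings while preserving sloped terrain; the TIN densification step of the paper's method would then refine this initial classification.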

  17. The PAU Camera

    NASA Astrophysics Data System (ADS)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k format are needed. The pixels are square, 15 μm in size. The optical characteristics of the prime focus corrector deliver a field of view in which eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The remaining CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, each hosting at most 16 filters; when observing, these sit inside the cryostat a few millimeters in front of the CCDs. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  18. Do Speed Cameras Reduce Collisions?

    PubMed Central

    Skubic, Jeffrey; Johnson, Steven B.; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods – before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions. PMID:24406979

  19. Spatial Modeling and Variability Analysis for Modeling and Prediction of Soil and Crop Canopy Coverage Using Multispectral Imagery from an Airborne Remote Sensing System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Based on a previous study on an airborne remote sensing system with automatic camera stabilization for crop management, multispectral imagery was acquired using the MS-4100 multispectral camera at different flight altitudes over a 115 ha cotton field. After the acquired images were geo-registered an...

  20. Large-format automated pulsed holography camera system

    NASA Astrophysics Data System (ADS)

    Rodin, Alexey M.; Ratcliffe, David B.; Rus, Roman

    2001-04-01

    An automated pulsed holography camera system for ultra-large format display holography has been created. This camera produces reflection and rainbow copies of up to 110 x 150 cm size as well as master holograms of up to 80 x 100 cm size. In addition, the system is capable of generating digital full-color transmission rainbow holograms from masters produced by a digital mastering machine. The camera utilizes a single-longitudinal-mode, phase-conjugated laser delivering pulses of 35 ns duration with a maximum energy of 8 J at a 526.5 nm wavelength. The high output energy necessitated the use of non-spherical spatial beam filtering in each beam pass. The camera incorporates instant switch-over from copying to mastering modes, permits digital electronic setting of all beam ratios, and allows manual tuning of scene-illuminating diffusers. Among the most important applications of this camera are the printing of A0-format 3D drawings for advanced virtual prototyping of machines and devices, large-format scientific and artistic holography, and the 3D-poster printing industry of the near future.