Sample records for airborne digital camera

  1. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000-acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 x 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
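
The 10-cm figure follows from the standard nadir ground-sample-distance relation, GSD = altitude x pixel pitch / focal length. A minimal sketch; the pixel pitch and focal length below are illustrative assumptions chosen to reproduce the reported value, not published DCS450 specifications:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground footprint of one pixel for a nadir-pointing frame camera."""
    return altitude_m * pixel_pitch_m / focal_length_m

# 1,000 ft flight altitude over the plantation, converted to metres
altitude_m = 1000 * 0.3048
# Hypothetical sensor parameters (assumed, not DCS450 specifications)
pixel_pitch_m = 9e-6      # 9-micron detector pitch
focal_length_m = 0.0274   # 27.4 mm lens

gsd = ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m)  # ~0.10 m
```

Under these assumed optics the 1,000 ft altitude yields roughly the 10 cm pixel size quoted in the abstract.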

  2. Multi-sensor fusion over the World Trade Center disaster site

    NASA Astrophysics Data System (ADS)

    Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey

    2002-09-01

    The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aid the rescuers in placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information on the rapidly changing disaster site. Because of the dynamic nature of the site, the data were acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of collection. During processing, the three datasets were combined and georeferenced to allow them to be inserted into the client's geographic information systems.

  3. Solar-Powered Airplane with Cameras and WLAN

    NASA Technical Reports Server (NTRS)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  4. Photogrammetry and Remote Sensing: New German Standards (din) Setting Quality Requirements of Products Generated by Digital Cameras, Pan-Sharpening and Classification

    NASA Astrophysics Data System (ADS)

    Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.

    2012-08-01

    Ten years after the introduction of the first digital airborne mapping camera at the ISPRS congress in Amsterdam in 2000, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development: the signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras. In addition, digital cameras can be spectrally and radiometrically calibrated. The use of these cameras required a rethinking in many places, though. New data products were introduced. In recent years, several activities took place that should lead to a better understanding of the cameras and the data they produce. Several projects, such as those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards will be presented. These include the standard for digital cameras, the standard for orthorectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of radar/SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development. The OGC has already published specifications in the field of photogrammetry and remote sensing. One goal of future joint work could be to merge these formerly independent developments into a jointly developed suite of implementation specifications for photogrammetry and remote sensing.

  5. Get the Picture?

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.

  6. Airborne Digital Sensor System and GPS-aided inertial technology for direct geopositioning in rough terrain

    USGS Publications Warehouse

    Sanchez, Richard D.

    2004-01-01

    High-resolution airborne digital cameras with onboard data collection based on the Global Positioning System (GPS) and inertial navigation systems (INS) technology may offer a real-time means to gather accurate topographic map information by reducing ground control and eliminating aerial triangulation. Past evaluations of this integrated system over relatively flat terrain have proven successful. The author uses the Emerge Digital Sensor System (DSS) combined with Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing to examine the positional mapping accuracy in rough terrain. The positional accuracy documented in this study did not meet large-scale mapping requirements owing to an apparent system mechanical failure. Nonetheless, the findings yield important information on a new approach for mapping in Antarctica and other remote or inaccessible areas of the world.

  7. Using Commercial Digital Cameras and Structure-from-Motion Software to Map Snow Cover Depth from Small Aircraft

    NASA Astrophysics Data System (ADS)

    Sturm, M.; Nolan, M.; Larsen, C. F.

    2014-12-01

    A long-standing goal in snow hydrology has been to map snow cover in detail, either mapping snow depth or snow water equivalent (SWE) with sub-meter resolution. Airborne LiDAR and air photogrammetry have been used successfully for this purpose, but both require significant investments in equipment and substantial processing effort. Here we detail a relatively inexpensive and simple airborne photogrammetric technique that can be used to measure snow depth. The main airborne hardware consists of a consumer-grade digital camera attached to a survey-quality, dual-frequency GPS. Photogrammetric processing is done using commercially available Structure from Motion (SfM) software that does not require ground control points. Digital elevation models (DEMs) are made from snow-free acquisitions in the summer and snow-covered acquisitions in winter, and the maps are then differenced to arrive at snow thickness. We tested the accuracy and precision of snow depths measured using this system through 1) a comparison with airborne scanning LiDAR, 2) a comparison of results from two independent and slightly different photogrammetric systems, and 3) a comparison to extensive on-the-ground measured snow depths. Vertical accuracy and precision are on the order of ±30 cm and ±8 cm, respectively. The accuracy can be made to approach that of the precision if suitable snow-free ground control points exist and are used to co-register summer and winter DEM maps. Final snow depth accuracy from our series of tests was on the order of ±15 cm. This photogrammetric method substantially lowers the economic and expertise barriers to entry for mapping snow.
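
The DEM-differencing step described above reduces to an array subtraction once the two surfaces are co-registered; a minimal sketch with synthetic stand-in grids (real DEMs would come from the SfM processing):

```python
import numpy as np

# Synthetic co-registered 2 x 2 DEMs (elevations in metres)
summer_dem = np.array([[100.0, 101.2], [99.5, 100.8]])   # snow-free surface
winter_dem = np.array([[100.9, 102.3], [100.1, 100.7]])  # snow-covered surface

# Snow depth = winter surface minus snow-free surface; small negative
# differences (within the stated measurement error) are clipped to zero.
snow_depth = np.clip(winter_dem - summer_dem, 0.0, None)
```

The quality of the result hinges entirely on the co-registration of the two DEMs, which is why snow-free ground control points improve the final accuracy.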

  8. Mountain pine beetle detection and monitoring: evaluation of airborne imagery

    NASA Astrophysics Data System (ADS)

    Roberts, A.; Bone, C.; Dragicevic, S.; Ettya, A.; Northrup, J.; Reich, R.

    2007-10-01

    The processing and evaluation of digital airborne imagery for detection, monitoring and modeling of mountain pine beetle (MPB) infestations are assessed. The most efficient and reliable remote sensing strategy for identification and mapping of infestation stages ("current" to "red" to "grey" attack) of MPB in lodgepole pine forests is determined for the most practical and cost-effective procedures. This research was planned specifically to enhance knowledge by determining the remote sensing imaging systems and analytical procedures that optimize resource management for this critical forest health problem. Within the context of this study, airborne remote sensing of forest environments for forest health determinations (MPB) is most suitably undertaken using multispectral digitally converted imagery (aerial photography) at scales of 1:8000 for early detection of current MPB attack and 1:16000 for mapping and sequential monitoring of red and grey attack. Digital conversion should be undertaken at 10 to 16 microns for B&W multispectral imagery and 16 to 24 microns for colour and colour infrared imagery. From an "operational" perspective, the use of twin mapping cameras with colour and B&W or colour infrared film will provide the best approximation of multispectral digital imagery with near-comparable performance in a competitive private sector context (open bidding).

  9. Design and development of an airborne multispectral imaging system

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rahul R.; Bachnak, Rafic; Lyle, Stacey; Steidley, Carl W.

    2002-08-01

    Advances in imaging technology and sensors have made airborne remote sensing systems viable for many applications that require reasonably good resolution at low cost. Digital cameras are making their mark on the market by providing high resolution at very high rates. This paper describes an aircraft-mounted imaging system (AMIS) that is being designed and developed at Texas A&M University-Corpus Christi (A&M-CC) with the support of a grant from NASA. The approach is to first develop and test a one-camera system that will be upgraded into a five-camera system that offers multispectral capabilities. AMIS will be low cost, rugged, and portable, and will have its own battery power source. Its immediate use will be to acquire images of the coastal area in the Gulf of Mexico for a variety of studies covering a broad spectral range from the near-ultraviolet to the near-infrared. This paper describes AMIS and its characteristics, discusses the process for selecting the major components, and presents the progress to date.

  10. SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements

    PubMed Central

    Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos

    2009-01-01

    In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating INS/GPS and a digital camera, designed to provide timely positioned and projected thermal images and video data streams that can be rapidly integrated into the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON, and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters, as well as on UAV platforms owned and operated by the Greek Air Force. PMID:22399963

  11. Recent improvements in hydrometeor sampling using airborne holography

    NASA Astrophysics Data System (ADS)

    Stith, J. L.; Bansemer, A.; Glienke, S.; Shaw, R. A.; Aquino, J.; Fugal, J. P.

    2017-12-01

    Airborne digital holography provides a new technique to study the sizes, shapes and locations of hydrometeors. Airborne holographic cameras are able to capture more optical information than traditional airborne hydrometeor instruments, which allows for more detailed information, such as the location and shape of individual hydrometeors over a relatively wide range of sizes. These cameras can be housed in an anti-shattering probe arm configuration, which minimizes the effects of probe tip shattering. Holographic imagery, with its three-dimensional view of hydrometeor spacing, is also well suited to detecting shattering events when present. A major problem with digital holographic techniques has been the amount of machine time and human analysis involved in analyzing holographic data. Here, we present some recent examples showing how holographic analysis can improve our measurements of liquid and ice particles, and we describe a format we have developed for routine archiving of holographic data, so that processed results can be utilized more routinely by a wider group of investigators. We present a side-by-side comparison of the imagery obtained from holographic reconstruction of ice particles from a holographic camera (HOLODEC) with imagery from a 3VCPI instrument, which utilizes a tube-based sampling geometry. Both instruments were carried on the NSF/NCAR GV aircraft. In a second application of holographic imaging, we compare measurements of cloud droplets from a Cloud Droplet Probe (CDP) with simultaneous measurements from HOLODEC. In some cloud regions the CDP data exhibit a bimodal size distribution, while the more local data from HOLODEC suggest that two mono-modal size distributions are present in the cloud and that the bimodality observed in the CDP is due to the averaging length. Thus, the holographic techniques have the potential to improve our understanding of the warm rain process in future airborne field campaigns.
The development of this instrument has been a university and national lab collaboration. Progress in automating the processing techniques has now reached a stage where processed data can be made readily available, so that holographic data from a field campaign can be utilized by a wider group of investigators.

  12. Airborne multicamera system for geo-spatial applications

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic; Kulkarni, Rahul R.; Lyle, Stacey; Steidley, Carl W.

    2003-08-01

    Airborne remote sensing has many applications that include vegetation detection, oceanography, marine biology, geographical information systems, and environmental coastal science analysis. Remotely sensed images, for example, can be used to study the aftermath of episodic events such as the hurricanes and floods that occur year-round in the Coastal Bend area of Corpus Christi. This paper describes an Airborne Multi-Spectral Imaging System that uses digital cameras to provide high resolution at very high rates. The software is based on Delphi 5.0 and IC Imaging Control's ActiveX controls. Both the time and the GPS coordinates are recorded. Three successful test flights have been conducted so far. The paper presents flight test results and discusses the issues being addressed to fully develop the system.

  13. Development of a highly automated system for the remote evaluation of individual tree parameters

    Treesearch

    Richard Pollock

    2000-01-01

    A highly-automated procedure for remotely estimating individual tree location, crown diameter, species class, and height has been developed. This procedure will involve the use of a multimodal airborne sensing system that consists of a digital frame camera, a scanning laser rangefinder, and a position and orientation measurement system. Data from the multimodal sensing...

  14. Target-Tracking Camera for a Metrology System

    NASA Technical Reports Server (NTRS)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrology system is used to determine the varying relative positions of radiating elements of an airborne synthetic-aperture radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology-system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
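
The PSD readout described above is the classic current-ratio computation: the spot position is the normalized difference of the two end-contact photocurrents, scaled by half the detector length. A minimal sketch (the 10 mm detector length is an assumed value, not a figure from the record):

```python
def psd_position(i1, i2, length_m):
    """Spot position on a 1-D position-sensitive detector, measured from
    the centre of its active length, from the two end-contact currents."""
    return 0.5 * length_m * (i2 - i1) / (i1 + i2)

# Equal currents place the spot at the centre of a hypothetical 10 mm PSD
center = psd_position(1.0, 1.0, 0.010)
# All current at contact 2 places the spot at the +5 mm end
edge = psd_position(0.0, 2.0, 0.010)
```

Because this is a single analog ratio rather than a pixel-array readout, it is the operation that lets the camera reach update rates of hundreds of hertz.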

  15. Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)

    NASA Astrophysics Data System (ADS)

    Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark

    2013-05-01

    The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3-kg MSIC is a self-contained, compact, variable-configuration, low-cost, real-time precision metadata annotator with embedded INS/GPS, designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.

  16. EAARL coastal topography--Alligator Point, Louisiana, 2010

    USGS Publications Warehouse

    Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan; Barras, J.A.

    2012-01-01

    This project provides highly detailed and accurate datasets of a portion of Alligator Point, Louisiana, acquired on March 5 and 6, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. 
Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.
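
The "bare earth" filtering idea can be illustrated with a deliberately crude sketch: keep only the lowest last-return elevation in each grid cell, so canopy hits are discarded. The actual ALPS filtering algorithms are considerably more sophisticated; this is only a toy version of the concept:

```python
import numpy as np

def grid_minimum_filter(points, cell_size):
    """Toy bare-earth filter: keep the lowest last-return elevation in
    each grid cell. points is an (N, 3) array of x, y, z coordinates."""
    lowest = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)
    return np.array(list(lowest.values()))

# Two returns fall in the same 5 m cell: the canopy hit (12.4 m) is
# discarded and the ground hit (2.1 m) kept; the third return is alone.
pts = np.array([[1.0, 1.0, 12.4], [2.0, 2.0, 2.1], [7.0, 1.0, 2.3]])
bare = grid_minimum_filter(pts, 5.0)
```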

  17. Analyzing RCD30 Oblique Performance in a Production Environment

    NASA Astrophysics Data System (ADS)

    Soler, M. E.; Kornus, W.; Magariños, A.; Pla, M.

    2016-06-01

    In 2014 the Institut Cartogràfic i Geològic de Catalunya (ICGC) decided to incorporate digital oblique imagery in its portfolio in response to the growing demand for this product. The reason can be attributed to its useful applications in a wide variety of fields and, most recently, to an increasing interest in 3D modeling. The selection phase for a digital oblique camera led to the purchase of the Leica RCD30 Oblique system, an 80-megapixel multispectral medium-format camera which consists of one nadir camera and four oblique viewing cameras acquiring images at an off-nadir angle of 35°. The system also has a multi-directional motion compensation on-board system to deliver the highest image quality. The emergence of airborne oblique cameras has run in parallel with the inclusion of computer vision algorithms in traditional photogrammetric workflows. Such algorithms rely on having multiple views of the same area of interest and take advantage of the image redundancy for automatic feature extraction. The multiview capability is highly fostered by the use of oblique systems, which simultaneously capture different points of view for each camera shot. Different companies and NMAs have started pilot projects to assess the capabilities of the 3D mesh that can be obtained using correlation techniques. Beyond a software prototyping phase, and taking into account the currently immature state of several components of the oblique imagery workflow, the ICGC has focused on deploying a real production environment, with special interest in matching the performance and quality of the existing production lines based on classical nadir images. This paper introduces different test scenarios and layouts to analyze the impact of different variables on the geometric and radiometric performance.
    Different variables such as flight altitude, side and forward overlap, and ground control point measurements and location have been considered for the evaluation of aerial triangulation and stereo plotting. Furthermore, two different flight configurations have been designed to measure the quality of the absolute radiometric calibration and the resolving power of the system. To quantify the effective resolving power of RCD30 Oblique images, a tool based on the computation of the Line Spread Function has been developed. The tool processes a region of interest that contains a single contour in order to extract a numerical measure of edge smoothness for a single flight session. The ICGC is strongly committed to deriving information from satellite and airborne multispectral remote sensing imagery. A seamless Normalized Difference Vegetation Index (NDVI) retrieved from Digital Metric Camera (DMC) reflectance imagery is one of the products in ICGC's portfolio. As an evolution of this well-defined product, this paper presents an evaluation of the absolute radiometric calibration of the RCD30 Oblique sensor. To assess the quality of the measure, the ICGC has developed a procedure based on simultaneous acquisition of RCD30 Oblique imagery and radiometrically calibrated AISA (Airborne Hyperspectral Imaging System) imagery.
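
The Line Spread Function measurement can be sketched as follows: differentiate the edge spread function sampled across the contour, then take the full width at half maximum of the result as the edge-smoothness metric. The synthetic tanh edge below is illustrative only, not ICGC's actual tool:

```python
import numpy as np

def lsf_fwhm(edge_profile, sample_spacing):
    """Width of the Line Spread Function: differentiate the edge spread
    function and return the full width at half maximum of |LSF|."""
    lsf = np.abs(np.gradient(edge_profile, sample_spacing))
    above = np.where(lsf >= lsf.max() / 2.0)[0]
    return (above[-1] - above[0]) * sample_spacing

# Synthetic blurred edge: a smooth transition a few samples wide
x = np.linspace(-5.0, 5.0, 101)
esf = 0.5 * (1.0 + np.tanh(x / 1.5))
width = lsf_fwhm(esf, sample_spacing=x[1] - x[0])
```

A sharper edge gives a narrower LSF and hence a higher effective resolving power; for this tanh edge the width comes out near 2.6 sample units.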

  18. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
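
The kind of fitted expression the abstract describes can be sketched as a power-law response in the irradiance-gain-exposure product, chosen here because, like the actual model, it is analytically invertible; the coefficients are illustrative assumptions, not the CSS-measured values:

```python
def camera_response(irradiance, gain, exposure, a=1200.0, gamma=0.85):
    """Illustrative nonlinear response: digitized counts as a power law
    in the irradiance * gain * exposure product (assumed coefficients)."""
    return a * (irradiance * gain * exposure) ** gamma

def inverse_response(counts, gain, exposure, a=1200.0, gamma=0.85):
    """Analytic inverse of the model: recover irradiance from counts."""
    return (counts / a) ** (1.0 / gamma) / (gain * exposure)

counts = camera_response(0.02, 50.0, 1e-3)        # arbitrary irradiance units
recovered = inverse_response(counts, 50.0, 1e-3)  # round-trips to the input
```

Analytic invertibility is what makes such a model useful in simulation: scene irradiance can be recovered from recorded counts at any gain and exposure setting.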

  19. Application of terrestrial 'structure-from-motion' photogrammetry on a medium-size Arctic valley glacier: potential, accuracy and limitations

    NASA Astrophysics Data System (ADS)

    Hynek, Bernhard; Binder, Daniel; Boffi, Geo; Schöner, Wolfgang; Verhoeven, Geert

    2014-05-01

    Terrestrial photogrammetry was the standard method for mapping high mountain terrain in the early days of mountain cartography, until it was replaced by aerial photogrammetry and airborne laser scanning. Modern low-price digital single-lens reflex (DSLR) cameras and highly automatic, inexpensive digital computer vision software with automatic image matching and multiview-stereo routines suggest a rebirth of terrestrial photogrammetry, especially in remote regions, where airborne surveying methods are expensive due to high flight costs. Terrestrial photogrammetry and modern automated image matching are widely used in geodesy; however, their application in glaciology is still rare, especially for surveying ice bodies at the scale of a few square kilometres, which is typical for valley glaciers. In August 2013 a terrestrial photogrammetric survey was carried out on Freya Glacier, a 6 km² valley glacier next to Zackenberg Research Station in NE Greenland, where detailed glacier mass balance monitoring was initiated during the last IPY. Photos were taken with a consumer-grade digital camera (Nikon D7100) from the ridges surrounding the glacier. To create a digital elevation model, the photos were processed with the software PhotoScan. A set of ~100 dGPS-surveyed ground control points on the glacier surface was used to georeference and validate the final DEM. The aim of this study was to produce a high-resolution, high-accuracy DEM of the actual surface topography of the Freya Glacier catchment with a novel approach, to explore the potential of modern low-cost terrestrial photogrammetry combined with state-of-the-art automated image matching and multiview-stereo routines for glacier monitoring, and to communicate this powerful and inexpensive method within the environmental research and glacier monitoring community.

  20. Airborne ballistic camera tracking systems

    NASA Technical Reports Server (NTRS)

    Redish, W. L.

    1976-01-01

    An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.

  21. EAARL Coastal Topography - Northeast Barrier Islands 2007: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the northeast coastal barrier islands in New York and New Jersey, acquired April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
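The extraction of the first and last significant returns from each waveform can be illustrated in miniature. The sketch below is not ALPS code; the noise-floor estimate, threshold factor, and 1-nanosecond sampling interval are assumptions for illustration.

```python
# Hedged sketch of first/last-return extraction from a digitized lidar
# waveform. ALPS itself is a custom NASA-USGS tool; here the noise floor
# is estimated from the leading samples and the sampling interval is
# assumed to be 1 ns.
C = 299_792_458.0          # speed of light, m/s
SAMPLE_INTERVAL_NS = 1.0   # digitizer sampling interval (assumed)

def significant_returns(waveform, noise_floor=None, k=3.0):
    """Return (first_range_m, last_range_m) for samples above the noise."""
    if noise_floor is None:
        # estimate the noise floor from the leading samples
        lead = waveform[:10]
        mean = sum(lead) / len(lead)
        var = sum((x - mean) ** 2 for x in lead) / len(lead)
        noise_floor = mean + k * var ** 0.5
    hits = [i for i, x in enumerate(waveform) if x > noise_floor]
    if not hits:
        return None
    # two-way travel: each sample spans c * dt / 2 metres of range
    m_per_sample = C * SAMPLE_INTERVAL_NS * 1e-9 / 2.0
    return hits[0] * m_per_sample, hits[-1] * m_per_sample
```

The gap between the first and last return is what the bare-earth filtering exploits: under vegetation, the last return is the better estimate of the ground surface.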

  2. EAARL Topography - Natchez Trace Parkway 2007: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Natchez Trace Parkway in Mississippi, acquired on September 14, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  3. EAARL Topography - Vicksburg National Military Park 2008: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Vicksburg National Military Park in Mississippi, acquired on March 6, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  4. EAARL Coastal Topography - Northeast Barrier Islands 2007: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Yates, Xan; Bonisteel, Jamie M.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the northeast coastal barrier islands in New York and New Jersey, acquired April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  5. EAARL Topography-Vicksburg National Military Park 2007: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first-surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Vicksburg National Military Park in Mississippi, acquired on September 12, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  6. EAARL Coastal Topography--Cape Canaveral, Florida, 2009: First Surface

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Plant, Nathaniel; Wright, C.W.; Nagle, D.B.; Serafin, K.S.; Klipp, E.S.

    2011-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Kennedy Space Center, FL. This project provides highly detailed and accurate datasets of a portion of the eastern Florida coastline beachface, acquired on May 28, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.

  7. EAARL Coastal Topography - Sandy Hook 2007

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of Gateway National Recreation Area's Sandy Hook Unit in New Jersey, acquired on May 16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  8. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were first used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and for property valuation, supporting better and more timely decisions in the buying and selling of residential and commercial property. Oblique imagery is also used for infrastructure monitoring, helping ensure the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired a MIDAS from TrackAir in 2011. The system consists of four tilted (45-degree) cameras and one vertical camera connected to a dedicated data-acquisition computer. The five digital cameras are based on the Canon EOS-1Ds Mark III with Zeiss lenses. The CCD format is 5,616 by 3,744 pixels (21 megapixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique focal lengths of 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights were flown at 600 m and 1,200 m for the 28-mm nadir camera and at 750 m and 1,500 m for the 50-mm nadir camera. The cameras were calibrated using a 3D cage and multiple convergent images with the Australis model. In this paper, the MIDAS system is described; a number of real datasets collected during these flights are presented together with their flight configurations; the data processing, system calibration, and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. The study revealed that an accuracy of about 1 to 1.5 GSD (ground sample distance) in planimetry and about 2 to 2.5 GSD in the vertical can be achieved. 
Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient to enable Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and to create 3D models from the georeferenced oblique imagery.
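The accuracies reported above are expressed in multiples of GSD, which for a frame camera follows directly from pixel pitch, focal length, and flying height (GSD = pixel pitch × H / f). A small sketch using the figures quoted in the abstract:

```python
def gsd(pixel_pitch_um, focal_mm, altitude_m):
    """Ground sample distance in metres: pixel pitch scaled by H/f."""
    return pixel_pitch_um * 1e-6 * altitude_m / (focal_mm * 1e-3)

# Figures quoted in the abstract: 6.4-micron pixels; 28-mm nadir lens
# flown at 600 m, 50-mm nadir lens at 750 m.
g28 = gsd(6.4, 28, 600)   # ~0.137 m per pixel
g50 = gsd(6.4, 50, 750)   # ~0.096 m per pixel

# The paper's accuracy bands, converted to ground units for the 28-mm case:
planimetric = (1.0 * g28, 1.5 * g28)   # ~0.14 to 0.21 m
vertical = (2.0 * g28, 2.5 * g28)      # ~0.27 to 0.34 m
```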

  9. Identification and extraction of the seaward edge of terrestrial vegetation using digital aerial photography

    USGS Publications Warehouse

    Harris, Melanie; Brock, John C.; Nayegandhi, A.; Duffy, M.; Wright, C.W.

    2006-01-01

    This report is created as part of the Aerial Data Collection and Creation of Products for Park Vital Signs Monitoring within the Northeast Region Coastal and Barrier Network project, which is a joint project between the National Park Service Inventory and Monitoring Program (NPS-IM), the National Aeronautics and Space Administration (NASA) Observational Sciences Branch, and the U.S. Geological Survey (USGS) Center for Coastal and Watershed Studies (CCWS). This report is one of a series that discusses methods for extracting topographic features from aerial survey data. It details step-by-step methods used to extract a spatially referenced digital line from aerial photography that represents the seaward edge of terrestrial vegetation along the coast of Assateague Island National Seashore (ASIS). One component of the NPS-IM/USGS/NASA project includes the collection of NASA aerial surveys over various NPS barrier islands and coastal parks throughout the National Park Service's Northeast Region. These aerial surveys consist of collecting optical remote sensing data from a variety of sensors, including the NASA Airborne Topographic Mapper (ATM), the NASA Experimental Advanced Airborne Research Lidar (EAARL), and down-looking digital mapping cameras.
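The report's step-by-step extraction procedure is not reproduced here, but the core idea of tracing a seaward vegetation edge can be sketched as scanning each image row from the seaward side for the first pixel whose vegetation index clears a threshold. The index grid, threshold value, and `seaward` parameter below are illustrative assumptions, not the report's method:

```python
# Hedged sketch (not the report's exact procedure): trace the seaward edge
# of vegetation by scanning each image row from the seaward side and
# recording the first pixel whose vegetation index clears a threshold.
def vegetation_edge(index_grid, threshold=0.3, seaward="left"):
    """index_grid: rows of per-pixel vegetation-index values.
    Returns one (row, col) vertex per row; col is None where no vegetation."""
    edge = []
    for r, row in enumerate(index_grid):
        cols = (range(len(row)) if seaward == "left"
                else range(len(row) - 1, -1, -1))
        hit = next((c for c in cols if row[c] >= threshold), None)
        edge.append((r, hit))
    return edge
```

The resulting vertices would then be converted to map coordinates using the image's georeferencing and exported as a digital line.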

  10. Measurement Sets and Sites Commonly Used for High Spatial Resolution Image Product Characterization

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    Scientists within NASA's Applied Sciences Directorate have developed a well-characterized remote sensing Verification & Validation (V&V) site at the John C. Stennis Space Center (SSC). This site has enabled the in-flight characterization of high-spatial-resolution satellite remote sensing products from Space Imaging IKONOS, DigitalGlobe QuickBird, and ORBIMAGE OrbView, as well as advanced multispectral airborne digital camera products. SSC uses engineered geodetic targets, edge targets, radiometric tarps, atmospheric monitoring equipment, and its Instrument Validation Laboratory to characterize high-spatial-resolution remote sensing data products. This presentation describes the SSC characterization capabilities and techniques in the visible through near-infrared spectrum and gives examples of calibration results.
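One common use of edge targets in this kind of spatial characterization is estimating the imaging system's edge response. A minimal sketch, not SSC's actual procedure: measure the 10-90% rise distance across a 1-D intensity profile sampled perpendicular to an edge target, assuming a monotonically rising profile:

```python
# Hedged sketch: a crude edge-response metric from an edge target.
# Assumes the profile rises monotonically across the edge; production
# methods (e.g. slanted-edge SFR) fit an oversampled ESF and derive MTF.
def rise_distance(profile, pixel_size_m):
    """10-90% rise distance (metres) of a 1-D edge profile."""
    lo, hi = min(profile), max(profile)
    t10 = lo + 0.10 * (hi - lo)
    t90 = lo + 0.90 * (hi - lo)
    i10 = next(i for i, v in enumerate(profile) if v >= t10)
    i90 = next(i for i, v in enumerate(profile) if v >= t90)
    return (i90 - i10) * pixel_size_m
```

A sharper system crosses from 10% to 90% of the edge contrast in fewer pixels, so a smaller rise distance indicates better effective resolution at a given GSD.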

  11. QWIP technology for both military and civilian applications

    NASA Astrophysics Data System (ADS)

    Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.

    2001-10-01

    Advanced thermal imaging infrared cameras have been a cost-effective and reliable means of obtaining the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with the California Institute of Technology (Caltech), is currently manufacturing the QWIP-ChipTM, a 320 x 256 element, bound-to-quasibound QWIP FPA. The camera operates in the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multitasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 watts. With a variety of video interface options, remote-control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent applications in both the military and commercial sectors. In remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring, and road-hazard detection. This presentation covers the current performance of the commercial QWIP cameras, conceptual platform systems, and the advanced image processing for military remote sensing and civilian applications currently being developed for road-hazard monitoring.

  12. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...
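The abstract above is truncated, but a typical product of a paired consumer-camera rig in agricultural remote sensing is a normalized difference vegetation index (NDVI) from co-registered near-infrared and red bands. A hedged sketch, assuming the two cameras' images have already been band-aligned and registered:

```python
# Hedged sketch: per-pixel NDVI from co-registered NIR and red bands,
# given as nested lists of reflectance-like values. Band registration
# between the two cameras is assumed to have been done already.
def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - red) / (NIR + red), computed per pixel."""
    return [[(n - r) / (n + r + eps) for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir, red)]
```

Healthy vegetation reflects strongly in the near infrared and absorbs red light, so NDVI values approach 1 over vigorous canopy and fall toward 0 over bare soil.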

  13. EAARL Topography - George Washington Birthplace National Monument 2008

    USGS Publications Warehouse

    Brock, John C.; Nayegandhi, Amar; Wright, C. Wayne; Stevens, Sara; Yates, Xan

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) and first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the George Washington Birthplace National Monument in Virginia, acquired on March 26, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  14. EAARL Coastal Topography - Northern Gulf of Mexico, 2007: First Surface

    USGS Publications Warehouse

    Smith, Kathryn E.L.; Nayegandhi, Amar; Wright, C. Wayne; Bonisteel, Jamie M.; Brock, John C.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) elevation data were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. The project provides highly detailed and accurate datasets of select barrier islands and peninsular regions of Louisiana, Mississippi, Alabama, and Florida, acquired June 27-30, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  15. EAARL Coastal Topography-Pearl River Delta 2008: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Miner, Michael D.; Yates, Xan; Bonisteel, Jamie M.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the University of New Orleans (UNO), Pontchartrain Institute for Environmental Sciences (PIES), New Orleans, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Pearl River Delta in Louisiana and Mississippi, acquired March 9-11, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
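
The sub-meter georeferencing that the GPS receivers and inertial measurement unit provide amounts to rotating each laser vector from the scanner frame into a local level frame and adding the antenna position. A toy flat-earth ENU version is sketched below; the nadir-mounted scanner geometry, rotation order, and omission of all mounting offsets are assumptions, not the EAARL calibration model.

```python
import numpy as np

def rot_x(a):  # roll
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # heading
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def georeference(range_m, scan_deg, roll_deg, pitch_deg, heading_deg, antenna_enu):
    """Toy direct georeferencing of one laser sample in flat-earth ENU.

    The laser points down in the body frame, deflected cross-track by the
    raster-scan angle; roll, pitch, and heading rotate the body frame into
    the local level frame before adding the GPS antenna position.
    """
    scan, roll, pitch, heading = np.radians(
        [scan_deg, roll_deg, pitch_deg, heading_deg])
    # Nadir-pointing laser deflected cross-track by the scan angle.
    laser_body = np.array([np.sin(scan), 0.0, -np.cos(scan)])
    R = rot_z(heading) @ rot_y(pitch) @ rot_x(roll)
    return np.asarray(antenna_enu, dtype=float) + range_m * (R @ laser_body)
```

With zero attitude and a zero scan angle, a 300-meter range from an antenna 300 meters up lands the sample directly below the aircraft; attitude errors of even a tenth of a degree move the footprint by tens of centimeters at that altitude, which is why the IMU is essential to sub-meter georeferencing.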

  16. EAARL Coastal Topography-Pearl River Delta 2008: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Miner, Michael D.; Yates, Xan; Bonisteel, Jamie M.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the University of New Orleans (UNO), Pontchartrain Institute for Environmental Sciences (PIES), New Orleans, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Pearl River Delta in Louisiana and Mississippi, acquired March 9-11, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  17. EAARL Topography - Jean Lafitte National Historical Park and Preserve 2006

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) and bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Jean Lafitte National Historical Park and Preserve in Louisiana, acquired on September 22, 2006. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  18. EAARL Coastal Topography - Northern Gulf of Mexico, 2007: Bare Earth

    USGS Publications Warehouse

    Smith, Kathryn E.L.; Nayegandhi, Amar; Wright, C. Wayne; Bonisteel, Jamie M.; Brock, John C.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. The purpose of this project is to provide highly detailed and accurate datasets of select barrier islands and peninsular regions of Louisiana, Mississippi, Alabama, and Florida, acquired June 27-30, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys.
Elevation measurements were collected over the survey area using the EAARL system and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
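
The determination of 'bare earth' under vegetation from a point cloud of last-return elevations can be approximated by a minimum-elevation block filter: within each grid cell, the lowest last return is the best local candidate for the ground. This is a crude stand-in for the specialized ALPS filters; the cell size and the grid-minimum rule are assumptions for illustration only.

```python
import numpy as np

def bare_earth_min_grid(last_returns, cell_m=2.0):
    """Keep the lowest last-return elevation in each grid cell.

    `last_returns` is an (N, 3) array-like of x, y, z last-return
    coordinates; returns a dict {(i, j): z_min} keyed by integer cell
    indices, a minimal bare-earth surface candidate.
    """
    pts = np.asarray(last_returns, dtype=float)
    cells = np.floor(pts[:, :2] / cell_m).astype(int)
    ground = {}
    for (i, j), z in zip(map(tuple, cells), pts[:, 2]):
        # Retain only the lowest elevation seen in this cell so far.
        if (i, j) not in ground or z < ground[(i, j)]:
            ground[(i, j)] = z
    return ground
```

Production filters refine this idea with adaptive cell sizes, slope constraints, and iterative surface fitting so that low vegetation and dense-canopy returns that never reach the ground are not mistaken for terrain.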

  19. EAARL Submerged Topography - U.S. Virgin Islands 2003

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived submerged topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), South Florida-Caribbean Network, Miami, FL; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate bathymetric datasets of a portion of the U.S. Virgin Islands, acquired on April 21, 23, and 30, May 2, and June 14 and 17, 2003. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  20. EAARL Coastal Topography-Cape Hatteras National Seashore, North Carolina, Post-Nor'Ida, 2009: Bare Earth

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Fredericks, Xan; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Stevens, Sara

    2011-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the National Park Service Southeast Coast Network's Cape Hatteras National Seashore in North Carolina, acquired post-Nor'Ida (November 2009 nor'easter) on November 27 and 29 and December 1, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  1. EAARL coastal topography and imagery–Western Louisiana, post-Hurricane Rita, 2005: First surface

    USGS Publications Warehouse

    Bonisteel-Cormier, Jamie M.; Wright, Wayne C.; Fredericks, Alexandra M.; Klipp, Emily S.; Nagle, Doug B.; Sallenger, Asbury H.; Brock, John C.

    2013-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, Virginia. This project provides highly detailed and accurate datasets of a portion of the Louisiana coastline beachface, acquired post-Hurricane Rita on September 27-28 and October 2, 2005. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys.
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Website.

  2. EAARL Coastal Topography-Maryland and Delaware, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Vivekanandan, Saisudha; Nayegandhi, Amar; Sallenger, A.H.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Klipp, E.S.

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL. This project provides highly detailed and accurate datasets of a portion of the eastern Maryland and Delaware coastline beachface, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  3. EAARL Coastal Topography-Eastern Louisiana Barrier Islands, Post-Hurricane Gustav, 2008: First Surface

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the eastern Louisiana barrier islands, acquired post-Hurricane Gustav (September 2008 hurricane) on September 6 and 7, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  4. EAARL coastal topography-Cape Hatteras National Seashore, North Carolina, post-Nor'Ida, 2009: first surface

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the National Park Service Southeast Coast Network's Cape Hatteras National Seashore in North Carolina, acquired post-Nor'Ida (November 2009 nor'easter) on November 27 and 29 and December 1, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  5. EAARL Coastal Topography-Mississippi and Alabama Barrier Islands, Post-Hurricane Gustav, 2008

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Segura, Martha

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Mississippi and Alabama barrier islands, acquired post-Hurricane Gustav (September 2008 hurricane) on September 8, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
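The paragraph above describes ALPS algorithms that "extract the range to the first and last significant return within each waveform." A minimal sketch of that idea follows (thresholding a digitized return waveform against an estimated noise floor; the threshold rule, sample spacing, and function name are illustrative assumptions, not the ALPS implementation):

```python
import numpy as np

def first_last_returns(waveform, sample_ns=1.0, k=3.0):
    """Ranges (m) to the first and last significant samples of a waveform.

    A sample is 'significant' when it exceeds the noise floor, estimated
    from the leading samples, by k standard deviations. Range follows from
    the two-way travel time: (c/2) * t.
    """
    w = np.asarray(waveform, dtype=float)
    noise = w[:10]
    thresh = noise.mean() + k * noise.std()
    hits = np.flatnonzero(w > thresh)
    if hits.size == 0:
        return None  # no return above the noise floor
    c_m_per_ns = 0.299792458  # speed of light, metres per nanosecond
    to_range = lambda i: 0.5 * c_m_per_ns * i * sample_ns
    return to_range(hits[0]), to_range(hits[-1])

# Synthetic waveform: flat noise with two pulses (e.g. canopy and ground).
wf = np.full(100, 5.0)
wf[40] = 60.0   # first return
wf[70] = 45.0   # last return
first, last = first_last_returns(wf)
```

The gap between the first and last return is what makes the later bare-earth and canopy products possible: over vegetation the first return comes from the canopy top and the last (often weaker) return from the ground beneath.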

  6. EAARL Coastal Topography and Imagery-Assateague Island National Seashore, Maryland and Virginia, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Assateague Island National Seashore in Maryland and Virginia, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys.
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
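The "specialized filtering algorithms" mentioned above distinguish ground returns from vegetation returns in a last-return point cloud. A deliberately simplified illustration of the principle is block-minimum filtering: because vegetation returns sit above the true surface, the lowest last return in each grid cell is a reasonable ground candidate. (This is a sketch of the general idea only; the ALPS filters are more sophisticated, and the cell size and names here are assumptions.)

```python
import numpy as np

def block_minimum_ground(points, cell=5.0):
    """Crude bare-earth filter: keep the lowest point per grid cell.

    points: (N, 3) array of x, y, z last-return coordinates (metres).
    Returns the per-cell minimum-elevation points as ground candidates.
    """
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts[:, :2] / cell).astype(int)   # integer cell indices
    ground = {}
    for key, p in zip(map(tuple, keys), pts):
        if key not in ground or p[2] < ground[key][2]:
            ground[key] = p
    return np.array(list(ground.values()))

# Three last returns: a 12 m canopy hit and a 1.2 m ground hit share one
# 5 m cell; a 0.9 m ground hit occupies the neighbouring cell.
pts = np.array([[1.0, 1.0, 12.0],
                [2.0, 2.0, 1.2],
                [8.0, 1.0, 0.9]])
ground = block_minimum_ground(pts)
```

Production filters add slope- and residual-based tests so that genuine terrain relief within a cell is not discarded along with the vegetation.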

  7. EAARL Coastal Topography-Fire Island National Seashore, New York, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Nayegandhi, Amar; Vivekanandan, Saisudha; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Bonisteel-Cormier, J.M.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired post-Nor'Ida (November 2009 nor'easter) on December 4, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  8. EAARL coastal topography and imagery-Fire Island National Seashore, New York, 2009

    USGS Publications Warehouse

    Vivekanandan, Saisudha; Klipp, E.S.; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired on July 9 and August 3, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  9. EAARL Coastal Topography-Chandeleur Islands, Louisiana, 2010: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Bonisteel-Cormier, Jamie M.; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Nagle, David B.; Vivekanandan, Saisudha; Yates, Xan; Klipp, Emily S.

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and submerged topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Chandeleur Islands, acquired March 3, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  10. EAARL Coastal Topography-Eastern Florida, Post-Hurricane Jeanne, 2004: First Surface

    USGS Publications Warehouse

    Fredericks, Xan; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Klipp, E.S.; Nagle, D.B.

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the eastern Florida coastline beachface, acquired post-Hurricane Jeanne (September 2004 hurricane) on October 1, 2004. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  11. EAARL Coastal Topography-Sandy Hook Unit, Gateway National Recreation Area, New Jersey, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Nayegandhi, Amar; Vivekanandan, Saisudha; Brock, J.C.; Wright, C.W.; Bonisteel-Cormier, J.M.; Nagle, D.B.; Klipp, E.S.; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Sandy Hook Unit of Gateway National Recreation Area in New Jersey, acquired post-Nor'Ida (November 2009 nor'easter) on December 4, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  12. EAARL Coastal Topography and Imagery-Naval Live Oaks Area, Gulf Islands National Seashore, Florida, 2007

    USGS Publications Warehouse

    Nagle, David B.; Nayegandhi, Amar; Yates, Xan; Brock, John C.; Wright, C. Wayne; Bonisteel, Jamie M.; Klipp, Emily S.; Segura, Martha

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) topography, first-surface (FS) topography, and canopy-height (CH) datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Naval Live Oaks Area in Florida's Gulf Islands National Seashore, acquired June 30, 2007. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
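This record adds a canopy-height (CH) product alongside the bare-earth (BE) and first-surface (FS) grids. The relationship between the three is direct: canopy height is the first-surface elevation minus the bare-earth elevation in each grid cell. A small sketch with made-up grids (the clipping of small negative differences to zero and the NaN no-data convention are assumptions, not the published processing rules):

```python
import numpy as np

# Illustrative 2x2 elevation grids in metres; NaN marks a no-data cell.
first_surface = np.array([[14.0, 3.1],
                          [ 9.5, np.nan]])
bare_earth    = np.array([[ 2.0, 3.0],
                          [ 1.5, np.nan]])

# Canopy height is the difference, clipped at zero so that sensor noise
# over open ground cannot yield negative heights; NaN propagates through.
canopy_height = np.clip(first_surface - bare_earth, 0.0, None)
```

For the cells above this gives a 12 m canopy over the first cell and essentially bare ground (0.1 m) over the second, with the no-data cell staying NaN.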

  13. EAARL Coastal Topography - Fire Island National Seashore 2007

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) and bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of Fire Island National Seashore in New York, acquired on April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  14. EAARL Coastal Topography-Assateague Island National Seashore, 2008: Bare Earth

    USGS Publications Warehouse

    Bonisteel, Jamie M.; Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Klipp, Emily S.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Assateague Island National Seashore in Maryland and Virginia, acquired March 24-25, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for pre-survey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  15. EAARL Coastal Topography-Assateague Island National Seashore, 2008: First Surface

    USGS Publications Warehouse

    Bonisteel, Jamie M.; Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Klipp, Emily S.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Assateague Island National Seashore in Maryland and Virginia, acquired March 24-25, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for pre-survey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  16. NASA IceBridge: Scientific Insights from Airborne Surveys of the Polar Sea Ice Covers

    NASA Astrophysics Data System (ADS)

    Richter-Menge, J.; Farrell, S. L.

    2015-12-01

    The NASA Operation IceBridge (OIB) airborne sea ice surveys are designed to continue a valuable series of sea ice thickness measurements by bridging the gap between NASA's Ice, Cloud and Land Elevation Satellite (ICESat), which operated from 2003 to 2009, and ICESat-2, which is scheduled for launch in 2017. Initiated in 2009, OIB has conducted campaigns over the western Arctic Ocean (March/April) and the Southern Ocean (October/November) on an annual basis, when the thickness of the sea ice cover is nearing its maximum. More recently, a series of Arctic surveys have also collected observations in the late summer, at the end of the melt season. The Airborne Topographic Mapper (ATM) laser altimeter is one of OIB's primary sensors, in combination with the Digital Mapping System digital camera, a Ku-band radar altimeter, a frequency-modulated continuous-wave (FMCW) snow radar, and a KT-19 infrared radiation pyrometer. Data from the campaigns are available to the research community at: http://nsidc.org/data/icebridge/. This presentation will summarize the spatial and temporal extent of the OIB campaigns and their complementary role in linking in situ and satellite measurements, advancing observations of sea ice processes across all length scales. Key scientific insights gained on the state of the sea ice cover will be highlighted, including snow depth, ice thickness, surface roughness and morphology, and melt pond evolution.

  17. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    NASA Astrophysics Data System (ADS)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of sea ice are now available at meter-scale resolution or better, providing new details on the properties and morphology of the ice pack across basin scales. For example, the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth, and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency, and distribution.
The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent/divergent ice zones, (ii) provide datasets that support enhanced parameterizations in numerical models as well as model initialization and validation, (iii) derive parameters of interest to Arctic stakeholders for marine navigation and ice engineering studies, and (iv) provide statistics that support algorithm development for the next generation of airborne and satellite altimeters, including NASA's ICESat-2 mission. We describe the potential contribution our results can make towards the improvement of coupled ice-ocean numerical models, and discuss how data synthesis and integration with high-resolution models may improve our understanding of sea ice variability and our capabilities in predicting the future state of the ice pack.
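As a generic illustration of the lead statistics discussed above (not the authors' processing chain; the run-length approach, names, and toy transect are our own assumptions), lead widths and center-to-center spacings can be extracted from a classified along-track transect:

```python
import numpy as np

def lead_statistics(mask, pixel_m):
    """Lead width and spacing statistics along a 1-D transect.

    mask    : boolean array, True where a pixel is classified as lead
              (open water), False where it is ice
    pixel_m : along-track pixel size in meters
    Returns (widths_m, spacings_m): the width of each lead and the
    center-to-center distance between consecutive leads.
    """
    m = np.asarray(mask, dtype=bool).astype(int)
    # Run-length encoding: indices where the ice/lead classification flips.
    edges = np.flatnonzero(np.diff(np.concatenate(([0], m, [0]))))
    starts, ends = edges[::2], edges[1::2]        # lead run boundaries
    widths = (ends - starts) * pixel_m
    centers = (starts + ends - 1) / 2.0 * pixel_m  # lead center positions
    return widths, np.diff(centers)

# Toy transect at 10 m pixels: a 3-pixel lead and a 2-pixel lead.
mask = np.array([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0], dtype=bool)
widths, spacings = lead_statistics(mask, pixel_m=10.0)
print(widths, spacings)  # -> [30. 20.] [65.]
```

The same run-length idea extends to two dimensions for lead area and floe-size statistics, at the cost of a connected-component labeling step.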

  18. EAARL-B submerged topography: Barnegat Bay, New Jersey, post-Hurricane Sandy, 2012-2013

    USGS Publications Warehouse

    Wright, C. Wayne; Troche, Rodolfo J.; Kranenburg, Christine J.; Klipp, Emily S.; Fredericks, Xan; Nagle, David B.

    2014-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived submerged topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida. This project provides highly detailed and accurate datasets for part of Barnegat Bay, New Jersey, acquired post-Hurricane Sandy on November 1, 5, 16, 20, and 30, 2012; December 5, 6, and 21, 2012; and January 10, 2013. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar system, known as the second-generation Experimental Advanced Airborne Research Lidar (EAARL-B), was used during data acquisition. The EAARL-B system is a raster-scanning, waveform-resolving, green-wavelength (532-nm) lidar designed to map nearshore bathymetry, topography, and vegetation structure simultaneously. The EAARL-B sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, down-looking red-green-blue (RGB) and infrared (IR) digital cameras, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL-B platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL-B system. The resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed originally in a NASA-USGS collaboration. 
ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Web site.
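The "bare earth" filtering idea recurs throughout these records. A deliberately crude sketch (not the ALPS algorithm; the function name, cell size, and data are assumptions) keeps only the lowest last-return elevation per grid cell, so that returns from vegetation above the ground drop out:

```python
import numpy as np

def grid_minimum_filter(points, cell_m):
    """Keep the lowest last-return point in each (x, y) grid cell.

    points : (N, 3) array of last-return (x, y, z) samples
    cell_m : grid cell size in meters
    Returns the subset of points that are cell-wise elevation minima,
    a crude proxy for the bare-earth surface under vegetation.
    """
    pts = np.asarray(points, dtype=float)
    cells = np.floor(pts[:, :2] / cell_m).astype(int)
    lowest = {}
    for idx, key in enumerate(map(tuple, cells)):
        if key not in lowest or pts[idx, 2] < pts[lowest[key], 2]:
            lowest[key] = idx
    return pts[sorted(lowest.values())]

# Two 2 m cells, each with one canopy hit above one ground hit.
pts = np.array([[0.5, 0.5, 5.0],    # canopy, cell (0, 0)
                [0.8, 0.2, 1.0],    # ground, cell (0, 0)
                [2.5, 0.5, 1.2],    # ground, cell (1, 0)
                [2.6, 0.4, 4.0]])   # canopy, cell (1, 0)
bare = grid_minimum_filter(pts, cell_m=2.0)
```

Production bare-earth filters add slope-adaptive thresholds and iterative surface fitting; a plain cell minimum misclassifies dense canopy with no ground returns.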

  19. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovénbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    Digital Elevation Models (DEMs) are a key tool for analyzing spatially dependent processes, including snow accumulation on slopes and glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure-from-Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs) or by including the GPS positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called MicMac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny-Ålesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows appear as a limitation because of the limited color dynamics of the digital cameras we used.
A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty of the point cloud's absolute position in its coordinate system. The 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
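Converting an SfM point cloud from its arbitrary frame to an absolute coordinate system with GCPs, as described above, amounts to estimating a seven-parameter similarity (Helmert) transform. A least-squares sketch using the closed-form Umeyama solution follows; it is not MicMac's implementation, and the function names and synthetic data are assumptions:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares scale s, rotation R, translation t with dst = s*R*src + t
    (Umeyama's closed form), e.g. to georeference an SfM cloud onto GCPs.

    src, dst : (N, 3) arrays of matched points, N >= 3 and not collinear
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d               # centered coordinates
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(src) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform from four "GCPs".
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 3))
Rz = np.array([[0.0, -1.0, 0.0],        # 90 degree rotation about z
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
dst = 2.0 * src @ Rz.T + np.array([10.0, 20.0, 30.0])
s, R, t = fit_similarity(src, dst)
print(round(float(s), 6))  # -> 2.0
```

With noisy GCPs the same estimator returns the best-fit transform, and the per-point residuals after applying it give a direct check on the georeferencing quality.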

  20. EAARL coastal topography-western Florida, post-Hurricane Charley, 2004: seamless (bare earth and submerged)

    USGS Publications Warehouse

    Nayegandhi, Amar; Bonisteel, Jamie M.; Wright, C. Wayne; Sallenger, A.H.; Brock, John C.; Yates, Xan

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived seamless (bare-earth and submerged) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Coastal and Marine Geology Program (CMGP), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the western Florida coastline beachface, acquired post-Hurricane Charley on August 17 and 18, 2004. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  1. EAARL-B coastal topography: eastern New Jersey, Hurricane Sandy, 2012: first surface

    USGS Publications Warehouse

    Wright, C. Wayne; Fredericks, Xan; Troche, Rodolfo J.; Klipp, Emily S.; Kranenburg, Christine J.; Nagle, David B.

    2014-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida. This project provides highly detailed and accurate datasets for a portion of the New Jersey coastline beachface, acquired pre-Hurricane Sandy on October 26, and post-Hurricane Sandy on November 1 and November 5, 2012. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar system, known as the second-generation Experimental Advanced Airborne Research Lidar (EAARL-B), was used during data acquisition. The EAARL-B system is a raster-scanning, waveform-resolving, green-wavelength (532-nm) lidar designed to map nearshore bathymetry, topography, and vegetation structure simultaneously. The EAARL-B sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, down-looking red-green-blue (RGB) and infrared (IR) digital cameras, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL-B platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL-B system. The resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. 
ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Web site.

  2. EAARL Coastal Topography - Northern Gulf of Mexico

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Sallenger, Abby; Wright, C. Wayne; Travers, Laurinda J.; Lebonitte, James

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived coastal topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. One objective of this research is to create techniques to survey areas for the purposes of geomorphic change studies following major storm events. The USGS Coastal and Marine Geology Program's National Assessment of Coastal Change Hazards project is a multi-year undertaking to identify and quantify the vulnerability of U.S. shorelines to coastal change hazards such as effects of severe storms, sea-level rise, and shoreline erosion and retreat. Airborne Lidar surveys conducted during periods of calm weather are compared to surveys collected following extreme storms in order to quantify the resulting coastal change. Other applications of high-resolution topography include habitat mapping, ecological monitoring, volumetric change detection, and event assessment. The purpose of this project is to provide highly detailed and accurate datasets of the northern Gulf of Mexico coastal areas, acquired on September 19, 2004, immediately following Hurricane Ivan. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Airborne Advanced Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532 nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. 
The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking RGB (red-green-blue) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system on September 19, 2004. The survey resulted in the acquisition of 3.2 gigabytes of data. The data were processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of 'last return' elevations.

  3. Validation of Suomi-NPP VIIRS sea ice concentration with very high-resolution satellite and airborne camera imagery

    NASA Astrophysics Data System (ADS)

    Baldwin, Daniel; Tschudi, Mark; Pacifici, Fabio; Liu, Yinghui

    2017-08-01

    Two independent VIIRS-based Sea Ice Concentration (SIC) products are validated against SIC as estimated from Very High Spatial Resolution Imagery for several VIIRS overpasses. The 375 m resolution VIIRS SIC from the Interface Data Processing Segment (IDPS) SIC algorithm is compared against estimates made from 2 m DigitalGlobe (DG) WorldView-2 imagery and also against estimates created from 10 cm Digital Mapping System (DMS) camera imagery. The 750 m VIIRS SIC from the Enterprise SIC algorithm is compared against DG imagery. The IDPS vs. DG comparisons reveal that, due to algorithm issues, many of the IDPS SIC retrievals were falsely assigned ice-free values when the pixel was clearly over ice. These false values increased the validation bias and RMS statistics. The IDPS vs. DMS comparisons were largely over ice-covered regions and did not demonstrate the false retrieval issue. The validation results show that products from both the IDPS and Enterprise algorithms were within or very close to the 10% accuracy (bias) specifications in both the non-melting and melting conditions, but only products from the Enterprise algorithm met the 25% specifications for the uncertainty (RMS).
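The bias and RMS metrics used in the validation are, in essence, the mean and root-mean-square of the matched SIC differences. A minimal sketch (the function and toy values are ours, not the paper's data) also shows how a single false ice-free retrieval inflates both statistics:

```python
import numpy as np

def validation_stats(retrieved, reference):
    """Bias and RMS difference of a retrieved sea ice concentration (SIC)
    product against a high-resolution reference, in percent concentration.
    """
    d = np.asarray(retrieved, float) - np.asarray(reference, float)
    return d.mean(), np.sqrt((d ** 2).mean())

# Three good retrievals plus one false ice-free (0 %) value over ice.
reference = np.array([95.0, 90.0, 100.0, 85.0])
retrieved = np.array([93.0, 88.0, 0.0, 86.0])
bias, rms = validation_stats(retrieved, reference)
# bias = -25.75 %, rms ~ 50 %: the one false retrieval dominates both.
```

This is why screening out algorithm artifacts such as false ice-free retrievals matters before comparing the statistics against the 10% bias and 25% uncertainty specifications.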

  4. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to where the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
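The released code itself is not included in the record. Its core geometric step, mapping ground (DEM) points through a posed pinhole camera into pixel coordinates, can be sketched as follows; the camera model, names, and numbers are illustrative assumptions, and the real software additionally handles occlusion, the ROI search, and lens distortion:

```python
import numpy as np

def project_dem_to_pixels(dem_pts, K, R, cam_pos):
    """Project world points (e.g. DEM posts) to pixels with a pinhole model.

    dem_pts : (N, 3) world coordinates of ground points
    K       : (3, 3) camera intrinsics matrix
    R       : (3, 3) world-to-camera rotation (from the INS pose)
    cam_pos : (3,) camera center in world coordinates
    Returns (N, 2) pixel coordinates; points behind the camera get NaN.
    """
    X = np.asarray(dem_pts, float) - np.asarray(cam_pos, float)
    cam = X @ R.T                        # rotate into the camera frame
    uvw = cam @ K.T                      # homogeneous pixel coordinates
    px = np.full((len(uvw), 2), np.nan)
    front = uvw[:, 2] > 0                # keep points in front of the camera
    px[front] = uvw[front, :2] / uvw[front, 2:3]
    return px

# Nadir camera 1000 m above the origin, z-up world, 1000 px focal length.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])           # world z-up -> camera z-forward (down)
ground = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
px = project_dem_to_pixels(ground, K, R, [0.0, 0.0, 1000.0])
```

At 1000 m altitude with a 1000 px focal length the ground sample distance is 1 m, so the point 100 m east of nadir lands 100 px from the principal point.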

  5. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

    In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g., including façades and building footprints). The data were usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of high-level-of-detail architectural surveys. The critical issues of acquisition from a common UAV (flight planning strategies, ground control point and check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in emergency situations or, as in the present paper, in the cultural heritage application field. The data processing used an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open-source software packages were tested. The achieved results are analysed and the discrepancies from reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  6. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer has excellent universality and expansibility, and its small volume and low weight are advantages on an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter settings, operates the filter wheel and stabilized platform, acquires image and POS data, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  7. Development of an Airborne High Resolution TV System (AHRTS)

    DTIC Science & Technology

    1975-11-01

    c. System Elements. The essential Airborne Subsystem elements of camera, video tape recorder, transmitter, and antennas are required to have... The camera operated over the 3000:1 light change as required. A solar shutter was incorporated to protect the vidicon from damage from direct view

  8. Generating High resolution surfaces from images: when photogrammetry and applied geophysics meets

    NASA Astrophysics Data System (ADS)

    Bretar, F.; Pierrot-Deseilligny, M.; Schelstraete, D.; Martin, O.; Quernet, P.

    2012-04-01

    Airborne digital photogrammetry has been used for some years to create digital models of the Earth's topography from calibrated cameras. In recent years, however, non-professional digital cameras have become a valuable means of reconstructing topographic surfaces. Today, the multi-megapixel resolution of non-professional digital cameras, used either in a close-range configuration or from low-altitude flights, provides a ground pixel size ranging from a fraction of a millimeter to a couple of centimeters, respectively. Such advances became reality because the data processing chain made a tremendous breakthrough during the last five years. This study investigates the potential of the open-source software MICMAC, developed by the French National Survey IGN (http://www.micmac.ign.fr), to calibrate unoriented digital images and calculate surface models of extremely high resolution for Earth science purposes. We would like to report two experiments performed in 2011. The first was performed in the context of risk assessment of rock falls and landslides along the cliffs of the Normandy seashore. The acquisition protocol for the first site, Criel-sur-Mer, was very simple: a walk along the vertical chalk cliffs, taking photos with an 18-mm focal length approximately every 50 m with an overlap of 80%, allowed 2.5 km of digital surface to be generated at centimeter resolution. The site of Les Vaches Noires was more complicated to acquire because of both the geology (dark clays) and the geometry (the landslide direction is parallel to the seashore and has a high field depth from the shore). We therefore developed an innovative device mounted on board an autogyro (in between an ultralight power-driven aircraft and a helicopter). The entire area was surveyed with a 70-mm focal length at 400 m above sea level with a ground pixel of 3 cm. MICMAC offers the possibility of directly georeferencing the digital models; here, georeferencing was performed using a network of wireless GPS receivers called Geocubes, also developed at IGN. 
The second experiment is part of field measurements performed over the flanks of the Piton de la Fournaise volcano on La Réunion island. In order to characterize the roughness of different types of lava flows, extremely high-resolution Digital Terrain Models (0.6 mm) were generated with MICMAC. The use of such high-definition topography made the characterization possible through the calculation of the correlation length, the standard deviation, and the fractal dimension. To conclude, we will sketch a synthesis of the needs of geoscientists versus the optimal resolution of digital topographic data.
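The two simplest roughness descriptors mentioned, the standard deviation and the correlation length, can be computed from a detrended elevation profile in a few lines. The sketch below uses our own conventions (mean removal only, a 1/e correlation-length definition) and a synthetic profile, not the MICMAC-derived data:

```python
import numpy as np

def roughness_stats(z, dx):
    """RMS height (standard deviation) and correlation length of a
    surface profile sampled every dx meters.

    The correlation length is taken here as the lag at which the
    normalized autocorrelation first drops below 1/e (one common
    convention; others use the half-maximum lag).
    """
    z = np.asarray(z, float)
    z = z - z.mean()                               # remove the mean level
    sigma = z.std()
    n = len(z)
    acf = np.correlate(z, z, mode="full")[n - 1:]  # non-negative lags
    acf = acf / acf[0]                             # normalize: acf[0] = 1
    below = np.flatnonzero(acf < 1.0 / np.e)
    corr_len = below[0] * dx if below.size else float("nan")
    return sigma, corr_len

# A sinusoidal "lava surface" of 10 m wavelength sampled every 0.1 m.
x = np.arange(0.0, 100.0, 0.1)
z = np.sin(2.0 * np.pi * x / 10.0)
sigma, corr_len = roughness_stats(z, dx=0.1)
# sigma is close to 1/sqrt(2); corr_len is close to
# (10 / (2*pi)) * arccos(1/e), i.e. roughly 1.9 m.
```

The fractal dimension requires a multi-scale analysis (e.g. the slope of the power spectrum or a variogram fit) and is deliberately left out of this sketch.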

  9. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    A calibration target was designed that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate a 3D laser scanner and a digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a digital camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera and achieves the joint calibration of the 3D laser scanner and the digital camera. Experiments show that this method is reliable.
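The paper's iterative DLT-plus-distortion solver is not reproduced here. The underlying 11-parameter DLT, a linear least-squares fit of the projective mapping from target coordinates to pixels, can be sketched as follows (no distortion terms; the names and the synthetic camera are our assumptions):

```python
import numpy as np

def dlt_calibrate(obj_pts, img_pts):
    """Solve the 11 DLT parameters L1..L11 of
        u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1)
        v = (L5*X + L6*Y + L7*Z + L8) / (L9*X + L10*Y + L11*Z + 1)
    from >= 6 non-coplanar object/image point pairs, by linear
    least squares (no lens distortion terms).
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b += [u, v]
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L

def dlt_project(L, pt):
    """Reproject one 3-D point through the fitted DLT parameters."""
    X, Y, Z = pt
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w)

# Synthetic check: simulate a distortion-free pinhole camera 10 m above
# the target, fit the DLT, and verify the reprojection residual.
rng = np.random.default_rng(1)
obj = rng.uniform([-2.0, -2.0, 0.0], [2.0, 2.0, 3.0], size=(8, 3))
img = [(800.0 * X / (10.0 - Z) + 320.0, -800.0 * Y / (10.0 - Z) + 240.0)
       for X, Y, Z in obj]
L = dlt_calibrate(obj, img)
resid = max(abs(u - up) + abs(v - vp)
            for (u, v), (up, vp) in ((q, dlt_project(L, p))
                                     for p, q in zip(obj, img)))
```

In the full method this linear solution serves as the starting point, and the distortion coefficients are then estimated by alternating or joint iteration, which the sketch omits.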

  10. SWUIS-A: A Versatile, Low-Cost UV/VIS/IR Imaging System for Airborne Astronomy and Aeronomy Research

    NASA Technical Reports Server (NTRS)

    Durda, Daniel D.; Stern, S. Alan; Tomlinson, William; Slater, David C.; Vilas, Faith

    2001-01-01

    We have developed and successfully flight-tested on 14 different airborne missions the hardware and techniques for routinely conducting valuable astronomical and aeronomical observations from high-performance, two-seater military-type aircraft. The SWUIS-A (Southwest Universal Imaging System - Airborne) system consists of an image-intensified CCD camera with broad band response from the near-UV to the near IR, high-quality foreoptics, a miniaturized video recorder, an aircraft-to-camera power and telemetry interface with associated camera controls, and associated cables, filters, and other minor equipment. SWUIS-A's suite of high-quality foreoptics gives it selectable, variable focal length/variable field-of-view capabilities. The SWUIS-A camera frames at 60 Hz video rates, which is a key requirement for both jitter compensation and high time resolution (useful for occultation, lightning, and auroral studies). Broadband SWUIS-A image coadds can exceed a limiting magnitude of V = 10.5 in <1 sec with dark sky conditions. A valuable attribute of SWUIS-A airborne observations is the fact that the astronomer flies with the instrument, thereby providing Space Shuttle-like "payload specialist" capability to "close-the-loop" in real-time on the research done on each research mission. Key advantages of the small, high-performance aircraft on which we can fly SWUIS-A include significant cost savings over larger, more conventional airborne platforms, worldwide basing obviating the need for expensive, campaign-style movement of specialized large aircraft and their logistics support teams, and ultimately faster reaction times to transient events. Compared to ground-based instruments, airborne research platforms offer superior atmospheric transmission, the mobility to reach remote and often-times otherwise unreachable locations over the Earth, and virtually-guaranteed good weather for observing the sky. 
Compared to space-based instruments, airborne platforms typically offer substantial cost advantages and the freedom to fly along nearly any groundtrack route for transient event tracking such as occultations and eclipses.

  11. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  12. Digitized Photography: What You Can Do with It.

    ERIC Educational Resources Information Center

    Kriss, Jack

    1997-01-01

    Discusses benefits of digital cameras which allow users to take a picture, store it on a digital disk, and manipulate/export these photos to a print document, Web page, or multimedia presentation. Details features of digital cameras and discusses educational uses. A sidebar presents prices and other information for 12 digital cameras. (AEF)

  13. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functions of consumer-grade digital cameras have been increasing rapidly thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras holds great promise in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy was originally investigated as a star-tracker problem, and many target-location algorithms have been proposed; it is widely accepted that least-squares ellipse fitting is the most accurate. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of the subpixel target-location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of routine digital close-range photogrammetry using consumer-grade cameras. With this motive, this paper presents an empirical test of several subpixel target-location algorithms and an indicator for estimating accuracy, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
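The simplest of the subpixel target-location methods in this family is the intensity-weighted centroid, against which ellipse fitting is usually compared. A minimal sketch, with a made-up symmetric 3x3 target window:

```python
def weighted_centroid(window):
    """Intensity-weighted centroid of a small image window (list of rows).
    Returns (row, col) with subpixel precision."""
    total = sum(sum(row) for row in window)
    r = sum(i * sum(row) for i, row in enumerate(window)) / total
    c = sum(j * v for row in window for j, v in enumerate(row)) / total
    return r, c

# Synthetic symmetric target: the centroid falls exactly at the centre pixel.
target = [[0, 1, 0],
          [1, 4, 1],
          [0, 1, 0]]
r, c = weighted_centroid(target)
```

On real imagery the window intensities are rarely symmetric, and the fractional part of (r, c) is what delivers the subpixel accuracy discussed above.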

  14. New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kemper, G.

    2012-07-01

    A huge number of small and medium-sized sensors have entered the market. Today's medium-format sensors reach 80 Mpix and make it possible to run projects of medium size, comparable with the first large-format digital cameras of about 6 years ago. New high-quality lenses and new developments in integration have prepared the market for photogrammetric work. Companies such as Phase One or Hasselblad, and producers or integrators such as Trimble, Optec, and others, have adopted these cameras for professional image production. In combination with small camera stabilizers they can also be used in small aircraft, making the equipment compact and easily transportable, e.g., for rapid-assessment purposes. The combination of different camera sensors enables multi- or hyperspectral installations useful, for example, for agricultural or environmental projects. Arrays of oblique-viewing cameras are on the market as well; in many cases these are small- and medium-format sensors combined as rotating or shifting devices, or simply in a fixed setup. Besides proper camera installation and integration, the software that controls the hardware and guides the pilot must solve many more tasks than a normal flight management system (FMS) did in the past. Small and relatively cheap laser scanners (e.g., Riegl) are on the market, and their proper combination with multispectral cameras and integrated planning and navigation is a challenge that has been solved by different software packages. Turnkey solutions are available, e.g., for monitoring power-line corridors, where taking images is just one part of the job. Integration of thermal camera systems with laser scanning and video capture must be combined with object-specific information stored in a database and linked when approaching the navigation point.

  15. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
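One relationship students can test with such a camera is the classic rule of thumb, usually attributed to Lord Rayleigh, for the sharpest pinhole diameter, d ≈ 1.9·sqrt(λ·f). The wavelength and pinhole-to-sensor distance below are illustrative values, not ones taken from the article.

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9):
    """Rayleigh's rule of thumb for the sharpest pinhole: d = 1.9*sqrt(lambda*f).
    Defaults to green light (550 nm), near the peak of the eye's sensitivity."""
    return 1.9 * math.sqrt(wavelength_m * focal_length_m)

# For a 50 mm pinhole-to-sensor distance, about a 0.3 mm pinhole.
d = optimal_pinhole_diameter(0.05)
```

Smaller pinholes are diffraction-limited and larger ones are geometry-limited, which is exactly the trade-off the demonstration lets students explore.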

  16. Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery

    Treesearch

    Demetrios Gatziolis

    2006-01-01

    A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...
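The variogram analysis referred to here reduces, in its simplest empirical form, to computing the semivariance of pixel digital numbers (DN) as a function of lag. The sketch below uses a made-up 1-D transect, not data from the study; in canopy imagery, the lag at which the semivariance levels off relates to mean crown diameter.

```python
def semivariance(dn, lag):
    """Empirical semivariance of a 1-D transect of pixel digital numbers
    at a given integer lag (half the mean squared pairwise difference)."""
    pairs = [(dn[i], dn[i + lag]) for i in range(len(dn) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

# Made-up transect crossing bright crowns and dark gaps: semivariance
# grows with lag until the scale of the crowns is reached.
transect = [10, 12, 15, 30, 32, 31, 12, 11, 14, 29, 33, 30]
gamma1 = semivariance(transect, 1)
gamma3 = semivariance(transect, 3)
```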

  17. Airborne camera and spectrometer experiments and data evaluation

    NASA Astrophysics Data System (ADS)

    Lehmann, F. F.; Bucher, T.; Pless, S.; Wohlfeil, J.; Hirschmüller, H.

    2009-09-01

    New stereo push-broom camera systems have been developed at the German Aerospace Center (DLR). The new small multispectral systems (Multi Functional Camerahead - MFC, Advanced Multispectral Scanner - AMS) are lightweight and compact and carry three or five RGB stereo lines of 8,000, 10,000 or 14,000 pixels, which are used for stereo processing and the generation of Digital Surface Models (DSM) and near-true orthoimage mosaics (TOM). Simultaneous acquisition with different types of MFC cameras for infrared and RGB data has been successfully tested. All spectral channels record image data at full resolution, so pan-sharpening is not necessary. As for the line-scanner data, an automatic processing chain exists for UltraCamD and UltraCamX. The different systems have been flown for various applications; the main fields of interest include environmental applications (flood simulation, monitoring tasks, classification) and 3D modelling (e.g., city mapping). From the DSM and TOM data, Digital Terrain Models (DTM) and 3D city models are derived. Textures for the facades are taken from oblique orthoimages, which are created from the same input data as the TOM and the DSM. The resulting models are characterised by high geometric accuracy and a perfect fit between image data and DSM. DLR is continuously developing and testing a wide range of sensor types and imaging platforms for terrestrial and space applications. The MFC sensors have been flown in combination with laser systems and imaging spectrometers, and dedicated data-fusion products have been developed, including hyperspectral orthoimages and 3D hyperspectral data.

  18. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause the computed image hash to be totally different from the secure hash.
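The hash-then-encrypt scheme the patent describes can be sketched with textbook RSA. The tiny made-up key below is a toy for illustration only; it is neither the patented implementation nor remotely secure, and real systems use vetted cryptographic libraries.

```python
import hashlib

# Toy RSA keypair: small made-up primes, for demonstration only.
p, q, e = 1000003, 1000033, 65537
n = p * q                           # public modulus (on the camera housing)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (embedded in the camera)

def sign(image_bytes):
    """Hash the image file, then encrypt the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(h, d, n)             # the digital signature

def verify(image_bytes, signature):
    """Decrypt the signature with the public key and compare hashes."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"raw image bytes")
```

Because even a one-bit change to the stored file changes the recomputed hash, `verify` fails on any altered image. (The three-argument `pow(e, -1, m)` modular inverse requires Python 3.8+.)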

  19. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  20. SLR digital camera for forensic photography

    NASA Astrophysics Data System (ADS)

    Har, Donghwan; Son, Youngho; Lee, Sungwon

    2004-06-01

    Forensic photography, systematically established in the late 19th century by Alphonse Bertillon of France, has developed considerably over the past 100 years, and this development will accelerate further with advances in high technology, in particular digital technology. This paper reviews three studies to answer the question: can the SLR digital camera replace traditional silver-halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of an SLR digital camera with silver-halide photography. 2. How much is ultraviolet or infrared sensitivity improved when the UV/IR cutoff filter built into the SLR digital camera is removed? 3. Comparison of the relative sensitivity of CCD and CMOS sensors to ultraviolet and infrared. The tests showed that the SLR digital camera has very low sensitivity to ultraviolet and infrared; the cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the filter significantly improved sensitivity to ultraviolet and infrared. For infrared in particular, the sensitivity of the SLR digital camera was better than that of silver-halide film. This shows the possibility of replacing silver-halide ultraviolet and infrared photography with the SLR digital camera, which thus appears well suited to forensic photography, where many ultraviolet and infrared photographs are taken.

  1. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image-capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, namely exposure value (EV), white balance (WB), light metering, ISO, and aperture (f-stop), were altered and controlled over the course of several image-capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. Testing was conducted from a terrestrial rather than an airborne platform because of the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges for the intended application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO. 
The single most important capture variable is exposure bias (EV): a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occur around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
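The apparent image motion (AIM) blur estimate mentioned above reduces to a one-line calculation: how far the scene moves across the sensor during the exposure, expressed in pixels. The mission values below are hypothetical, not figures from this study.

```python
def motion_blur_pixels(ground_speed_mps, shutter_s, gsd_m):
    """Apparent image motion during one exposure, in pixels: ground
    distance travelled during the shutter time divided by the GSD."""
    return ground_speed_mps * shutter_s / gsd_m

# Hypothetical values: 50 m/s ground speed, 1/1000 s shutter, 5 cm GSD.
blur = motion_blur_pixels(50.0, 1 / 1000, 0.05)  # ~1 pixel of motion blur
```

Keeping this figure well below one pixel is what drives the shutter-speed (and hence ISO and f-stop) choices discussed in the abstract.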

  2. Evaluation of modified portable digital camera for screening of diabetic retinopathy.

    PubMed

    Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi

    2009-01-01

    To describe a portable wide-field noncontact digital camera for posterior-segment photography. The camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to 2.4x, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative for acquiring fundus images and provides a tool for screening posterior-segment conditions, including diabetic retinopathy, in a variety of clinical settings.

  3. Airborne multispectral identification of individual cotton plants using consumer-grade cameras

    USDA-ARS?s Scientific Manuscript database

    Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual plants or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...

  4. Applications of Photogrammetry for Analysis of Forest Plantations. Preliminary study: Analysis of individual trees

    NASA Astrophysics Data System (ADS)

    Mora, R.; Barahona, A.; Aguilar, H.

    2015-04-01

    This paper presents a method for using highly detailed volumetric information, captured with a land-based photogrammetric survey, to obtain information about individual trees. Applying LIDAR analysis techniques, it is possible to measure diameter at breast height, height to first branch (commercial height), basal area, and volume of an individual tree; given this information, it is possible to calculate how much of the tree can be exploited as wood. The main objective is to develop a methodology for successfully surveying one individual tree, capturing every side of the stem using a high-resolution digital camera and reference marks with GPS coordinates. The process was executed for several individuals of two species present in the metropolitan area of San José, Costa Rica, Delonix regia (Bojer) Raf. and Tabebuia rosea (Bertol.) DC., each with different height, stem shape, and crown area. Using a photogrammetry suite, all the pictures are aligned and georeferenced, and a dense point cloud is generated with enough detail to perform the required measurements, as well as a solid three-dimensional model for volume measurement. This research opens the way to a capture methodology with an airborne camera using close-range UAVs. An airborne platform will make it possible to capture every individual in a forest plantation; furthermore, if the analysis techniques applied in this research are automated, it will be possible to calculate the exploitable potential of a forest plantation with high precision and improve its management.

  5. Development of an airborne laser bathymeter

    NASA Technical Reports Server (NTRS)

    Kim, H., H.; Cervenka, P. O.; Lankford, C. B.

    1975-01-01

    An airborne laser depth-sounding system was built and taken through a complete series of field tests. Two green laser sources were tried: a pulsed neon laser at 540 nm and a frequency-doubled Nd:YAG transmitter at 532 nm. To obtain a depth resolution of better than 20 cm, the pulses had a duration of 5 to 7 nanoseconds and could be fired at rates of up to 50 pulses per second. In the receiver, the signal was detected by a photomultiplier tube connected to a 28-cm-diameter Cassegrain telescope aimed vertically downward. Oscilloscope traces of the signal reflected from the sea surface and the ocean floor could either be recorded by a movie camera on 35 mm film or digitized into 500 discrete channels of information and stored on magnetic tape, from which depth information could be extracted. An aerial color movie camera recorded the geographic footprint while a boat crew of oceanographers measured depth and other relevant water parameters. About two hundred hours of flight time on the NASA C-54 airplane in the areas of Chincoteague, Virginia, the Chesapeake Bay, and Key West, Florida, yielded information on the actual operating conditions of such a system and helped optimize the design. One can predict the maximum depth attainable in a mission by measuring the effective attenuation coefficient in flight; this quantity is four times smaller than the usual narrow-beam attenuation coefficient. Several square miles of varied underwater landscape were also mapped.
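The depth measurement itself follows from the delay between the surface and bottom returns, with light slowed by the water's refractive index. A minimal sketch (the refractive-index value is an approximation, not a figure from the report):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # approximate refractive index of seawater

def depth_from_two_way_time(dt_s):
    """Water depth from the delay between surface and bottom returns
    (two-way travel, light slowed by the water's refractive index)."""
    return C * dt_s / (2.0 * N_WATER)

# A 5 ns pulse spans roughly half a metre of water depth, so resolving
# 20 cm relies on timing within the pulse envelope.
pulse_limit = depth_from_two_way_time(5e-9)
```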

  6. Application of terrestrial photogrammetry for the mass balance calculation on Montasio Occidentale Glacier (Julian Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Piermattei, Livia; Carturan, Luca; Calligaro, Simone; Blasone, Giacomo; Guarnieri, Alberto; Tarolli, Paolo; Dalla Fontana, Giancarlo; Vettore, Antonio

    2014-05-01

    Digital elevation models (DEMs) of glaciated terrain are commonly used to measure changes in geometry and hence infer the mass balance of glaciers. Different tools and methods exist to obtain information about the 3D geometry of terrain. Recent improvements in the quality and performance of digital cameras for close-range photogrammetry, and the development of automatic digital photogrammetric processing, make the structure-from-motion (SfM) photogrammetric technique competitive for producing high-quality 3D models, compared with efficient but expensive and logistically demanding survey technologies such as airborne and terrestrial laser scanning (TLS). The purpose of this work is to test the SfM approach, using a consumer-grade SLR camera and the low-cost computer-vision-based software package Agisoft PhotoScan (Agisoft LLC), to monitor the mass balance of the Montasio Occidentale glacier, a 0.07 km2, low-altitude, debris-covered glacier located in the Eastern Italian Alps. The quality of the 3D models produced by the SfM process was assessed by comparison with digital terrain models obtained through TLS surveys carried out on the same dates. The TLS technique has indeed proved very effective in determining the volume change of this glacier in recent years. Our results show that the photogrammetric approach can produce point-cloud densities comparable to those derived from TLS measurements. Furthermore, the horizontal and vertical accuracies are of the same order of magnitude as for TLS (centimetric to decimetric). The effect of different landscape characteristics (e.g., distance from the camera or terrain gradient) and of different substrata (rock, debris, ice, snow, and firn) on the accuracy of the SfM reconstruction relative to TLS was also evaluated. 
Given the good results obtained on the Montasio Occidentale glacier, it can be concluded that terrestrial photogrammetry, with the advantages of portability, ease of use, and above all low cost, makes it possible to obtain high-resolution DEMs that enable good mass-balance estimates on glaciers with similar characteristics.
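The geodetic mass-balance step, differencing two co-registered DEMs and summing the elevation change over each cell's area, can be sketched as follows (toy 2x2 grids with 1 m cells, not survey data):

```python
def volume_change(dem_old, dem_new, cell_size_m):
    """Volume change (m^3) between two co-registered, equal-shaped DEMs,
    each given as a list of rows of elevations in metres."""
    cell_area = cell_size_m ** 2
    return sum((b - a) * cell_area
               for row_old, row_new in zip(dem_old, dem_new)
               for a, b in zip(row_old, row_new))

# Uniform 0.5 m surface lowering over four 1 m cells -> -2 m^3.
dem_year1 = [[10.0, 10.0], [10.0, 10.0]]
dem_year2 = [[9.5, 9.5], [9.5, 9.5]]
dvol = volume_change(dem_year1, dem_year2, 1.0)
```

Dividing the volume change by glacier area and converting with an assumed ice or firn density yields the mass balance in metres water equivalent.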

  7. Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications

    NASA Astrophysics Data System (ADS)

    Carpentieri, Bruno; Pizzolante, Raffaele

    2017-12-01

    Hyperspectral images are acquired by special airborne or spaceborne cameras (sensors) that collect information from the electromagnetic spectrum of the observed terrain. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally developed for mining and geology applications because of their capability to correctly identify various types of underground minerals by analysing the reflected spectra, their usage has spread to other application fields, such as ecology, military and surveillance, historical research, and even archaeology. The large amount of data produced by hyperspectral sensors, the fact that these images are acquired at high cost by airborne sensors, and the fact that they are generally transmitted to a base station make an efficient and secure transmission protocol necessary. In this paper, we propose a novel framework for secure and efficient transmission of hyperspectral images that combines a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, with a state-of-the-art predictive lossless compression algorithm.

  8. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  9. Next-generation digital camera integration and software development issues

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next-generation digital cameras arising from requirements for connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality, and interoperability features, thanks to advances in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between image captures. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed, and the real-time operating system. This paper presents the LSI DCAM-101, a single-chip digital-camera solution, with an overview of its architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, and testing and integration on this complex silicon device. The strategy for optimizing performance on this architecture is also presented.

  10. Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.

    ERIC Educational Resources Information Center

    Mills, David A.; Kelley, Kevin; Jones, Michael

    2001-01-01

    Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)

  11. Digital Cameras for Student Use.

    ERIC Educational Resources Information Center

    Simpson, Carol

    1997-01-01

    Describes the features, equipment and operations of digital cameras and compares three different digital cameras for use in education. Price, technology requirements, features, transfer software, and accessories for the Kodak DC25, Olympus D-200L and Casio QV-100 are presented in a comparison table. (AEF)

  12. Evaluation of an airborne remote sensing platform consisting of two consumer-grade cameras for crop identification

    USDA-ARS?s Scientific Manuscript database

    Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications have not been well documented in related ...

  13. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  14. Highly Portable Airborne Multispectral Imaging System

    NASA Technical Reports Server (NTRS)

    Lehnemann, Robert; Mcnamee, Todd

    2001-01-01

    A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1 to 1.5 km. The system was developed especially for use in coastal environments and is well suited to remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.

  15. Camera Ready: Capturing a Digital History of Chester

    ERIC Educational Resources Information Center

    Lehman, Kathy

    2008-01-01

    Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD.…

  16. Molecular Shocks Associated with Massive Young Stars: CO Line Images with a New Far-Infrared Spectroscopic Camera on the Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Watson, Dan M.

    1997-01-01

    Under the terms of our contract with NASA Ames Research Center, the University of Rochester (UR) offers the following final technical report on grant NAG 2-958, "Molecular shocks associated with massive young stars: CO line images with a new far-infrared spectroscopic camera," given for implementation of the UR Far-Infrared Spectroscopic Camera (FISC) on the Kuiper Airborne Observatory (KAO) and use of this camera for observations of star-formation regions. Two KAO flights in FY 1995, the final year of KAO operations, were awarded to this program, conditional upon a technical readiness confirmation, which was given in January 1995. The funding period covered in this report is 1 October 1994 - 30 September 1996. The project was supported with $30,000, and no funds remained at the conclusion of the project.

  17. Airborne digital-image data for monitoring the Colorado River corridor below Glen Canyon Dam, Arizona, 2009 - Image-mosaic production and comparison with 2002 and 2005 image mosaics

    USGS Publications Warehouse

    Davis, Philip A.

    2012-01-01

    Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects. 
The final image mosaic for the western end of the corridor has areas of cloud shadow because of persistent inclement weather during data collection. This report presents visual comparisons of the 2002, 2005, and 2009 digital-image mosaics for various physical, biological, and cultural resources within the Colorado River ecosystem. All of the comparisons show the superior quality of the 2009 image data. In fact, the 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River.

  18. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera, designed for use in screening for retinopathy of prematurity (ROP). There, as in other applications, a small, lightweight digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard briefcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project for ROP screening; telemedicine is an ideal application for this camera system, exploiting both of its advantages: portability and digital imaging.

  19. Monitoring eruptive activity at Mount St. Helens with TIR image data

    USGS Publications Warehouse

    Vaughan, R.G.; Hook, S.J.; Ramsey, M.S.; Realmuto, V.J.; Schneider, D.J.

    2005-01-01

    Thermal infrared (TIR) data from the MASTER airborne imaging spectrometer were acquired over Mount St. Helens in September and October 2004, before and after the onset of recent eruptive activity. Pre-eruption data showed no measurable increase in surface temperatures before the first phreatic eruption on October 1. MASTER data acquired during the initial eruptive episode on October 14 showed maximum temperatures of ~330 °C, and TIR data acquired concurrently from a Forward Looking Infrared (FLIR) camera showed maximum temperatures of ~675 °C in narrow (~1-m) fractures of molten rock on a new resurgent dome. MASTER and FLIR thermal flux calculations indicated a radiative cooling rate of ~714 J/m2/s over the new dome, corresponding to a radiant power of ~24 MW. MASTER data indicated the new dome was dacitic in composition, and digital elevation data derived from LIDAR acquired concurrently with MASTER showed that the dome growth correlated with the areas of elevated temperatures. Low SO2 concentrations in the plume combined with sub-optimal viewing conditions prohibited quantitative measurement of plume SO2. The results demonstrate that airborne TIR data can provide information on the temperature of both the surface and plume and the composition of new lava during eruptive episodes. Given sufficient resources, the airborne instrumentation could be deployed rapidly to a newly-awakening volcano and provide a means for remote volcano monitoring. Copyright 2005 by the American Geophysical Union.
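    As a quick sanity check on the figures reported in this abstract, radiant power is simply flux times emitting area, so the quoted cooling rate and radiant power together imply a dome area of roughly 3.4 × 10^4 m² (the area itself is not stated in the abstract):

```python
# Hypothetical back-of-the-envelope check: radiant power = flux x area.
flux = 714.0         # radiative cooling rate over the new dome, W/m^2 (= J/m^2/s)
power = 24e6         # reported radiant power, W
area = power / flux  # implied emitting area of the new dome, m^2 (~33,600 m^2)
print(f"implied dome area: {area:.0f} m^2")
```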

  20. Single chip camera active pixel sensor

    NASA Technical Reports Server (NTRS)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single-chip camera includes communications circuitry to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  1. Employing airborne multispectral digital imagery to map Brazilian pepper infestation in south Texas.

    USDA-ARS?s Scientific Manuscript database

    A study was conducted in south Texas to determine the feasibility of using airborne multispectral digital imagery for differentiating the invasive plant Brazilian pepper (Schinus terebinthifolius) from other cover types. Imagery obtained in the visible, near infrared, and mid infrared regions of th...

  2. Selecting the right digital camera for telemedicine-choice for 2009.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret

    2010-03-01

    Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.

  3. Using Digital Imaging in Classroom and Outdoor Activities.

    ERIC Educational Resources Information Center

    Thomasson, Joseph R.

    2002-01-01

    Explains how to use digital cameras and related basic equipment during indoor and outdoor activities. Uses digital imaging in general botany class to identify unknown fungus samples. Explains how to select a digital camera and other necessary equipment. (YDS)

  4. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-Appliance based Communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.

  5. Voss with video camera in Service Module

    NASA Image and Video Library

    2001-04-08

    ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.

  6. Utilizing Structure-from-Motion Photogrammetry with Airborne Visual and Thermal Images to Monitor Thermal Areas in Yellowstone National Park

    NASA Astrophysics Data System (ADS)

    Carr, B. B.; Vaughan, R. G.

    2017-12-01

    The thermal areas in Yellowstone National Park (Wyoming, USA) are constantly changing. Persistent monitoring of these areas is necessary to better understand the behavior and potential hazards of both the thermal features and the deeper hydrothermal system driving the observed surface activity. As part of the Park's monitoring program, thousands of visual and thermal infrared (TIR) images have been acquired from a variety of airborne platforms over the past decade. We have used structure-from-motion (SfM) photogrammetry techniques to generate a variety of data products from these images, including orthomosaics, temperature maps, and digital elevation models (DEMs). Temperature maps were generated for Upper Geyser Basin and Norris Geyser Basin for the years 2009-2015, by applying SfM to nighttime TIR images collected from an aircraft-mounted forward-looking infrared (FLIR) camera. Temperature data were preserved through the SfM processing by applying a uniform linear stretch over the entire image set to convert between temperature and a 16-bit digital number. Mosaicked temperature maps were compared to the original FLIR image frames and to ground-based temperature data to constrain the accuracy of the method. Due to pixel averaging and resampling, among other issues, the derived temperature values are typically within 5-10 ° of the values of the un-resampled image frame. We also created sub-meter resolution DEMs from airborne daytime visual images of individual thermal areas. These DEMs can be used for resource and hazard management, and in cases where multiple DEMs exist from different times, for measuring topographic change, including change due to thermal activity. For example, we examined the sensitivity of the DEMs to topographic change by comparing DEMs of the travertine terraces at Mammoth Hot Springs, which can grow at > 1 m per year. 
These methods are generally applicable to images from airborne platforms, including planes, helicopters, and unmanned aerial systems, and can be used to monitor thermal areas on a variety of spatial and temporal scales.
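    The temperature-preservation trick described in this record, a uniform linear stretch mapping temperature to a 16-bit digital number across the whole image set, can be sketched as follows. The stretch limits here are hypothetical; the survey's actual limits are not given in the abstract:

```python
import numpy as np

# Assumed global stretch limits for the whole image set (hypothetical values).
T_MIN, T_MAX = -20.0, 120.0  # temperature range, deg C
DN_MAX = 65535               # 16-bit digital number range

def temp_to_dn(t):
    """Encode temperature (deg C) as a 16-bit DN via a uniform linear stretch."""
    t = np.clip(t, T_MIN, T_MAX)
    return np.round((t - T_MIN) / (T_MAX - T_MIN) * DN_MAX).astype(np.uint16)

def dn_to_temp(dn):
    """Invert the stretch to recover temperature after SfM mosaicking."""
    return dn.astype(float) / DN_MAX * (T_MAX - T_MIN) + T_MIN
```

Because the stretch is uniform across every frame, DN values remain comparable through mosaicking, and the quantization step (140 °C / 65535 ≈ 0.002 °C here) is negligible relative to the 5-10° accuracy reported above.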

  7. Spectral colors capture and reproduction based on digital camera

    NASA Astrophysics Data System (ADS)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. Spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera device, and the CIEXYZ color space, and provides a basis for further studies of spectral color reproduction on digital devices. The methods used were wavelength calibration of the spectral colors and digital camera characterization. The spectrum was obtained through a grating spectroscopy system. A photograph of a clear, reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, and the RGB values of the color spectrum were extracted at 1040 equally spaced locations. For each location, two wavelength values were obtained: one calculated from the grating equation and one measured with a spectrophotometer. Polynomial fitting was used for camera characterization to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm, and under the polynomial fitting method the average color difference of the test samples is 3.76. This satisfies the application needs of spectral colors in digital devices such as displays.
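    Polynomial-fitting camera characterization, as used in this record, typically expands camera RGB into polynomial terms and solves a least-squares mapping to CIEXYZ. A minimal sketch with a hypothetical second-order term set (the paper's exact polynomial order is not stated in the abstract):

```python
import numpy as np

def poly_terms(rgb):
    """Hypothetical second-order polynomial expansion of RGB values (N x 3)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([r, g, b, r * g, r * b, g * b,
                            r * r, g * g, b * b, np.ones_like(r)])

def fit_characterization(rgb_train, xyz_train):
    """Least-squares fit of a 10-term polynomial mapping camera RGB to CIEXYZ."""
    M, *_ = np.linalg.lstsq(poly_terms(rgb_train), xyz_train, rcond=None)
    return M  # 10 x 3 coefficient matrix

def rgb_to_xyz(rgb, M):
    """Apply the fitted characterization to new RGB samples."""
    return poly_terms(rgb) @ M
```

In practice the training pairs would come from photographing color patches of known CIEXYZ values (measured with a spectrophotometer), and the fit quality is judged by the average color difference on held-out test samples, as in the abstract.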

  8. Rapid-Response or Repeat-Mode Topography from Aerial Structure from Motion

    NASA Astrophysics Data System (ADS)

    Nissen, E.; Johnson, K. L.; Fitzgerald, F. S.; Morgan, M.; White, J.

    2014-12-01

    This decade has seen a surge of interest in Structure-from-Motion (SfM) as a means of generating high-resolution topography and coregistered texture maps from stereo digital photographs. Using an unstructured set of overlapping photographs captured from multiple viewpoints and minimal GPS ground control, SfM solves simultaneously for scene topography and camera positions, orientations and lens parameters. The use of cheap unmanned aerial vehicles or tethered helium balloons as camera platforms expedites data collection and overcomes many of the cost, time and logistical limitations of LiDAR surveying, making it a potentially valuable tool for rapid response mapping and repeat monitoring applications. We begin this presentation by assessing what data resolutions and precisions are achievable using a simple aerial camera platform and commercial SfM software (we use the popular Agisoft Photoscan package). SfM point clouds generated at two small (~0.1 km2), sparsely-vegetated field sites in California compare favorably with overlapping airborne and terrestrial LiDAR surveys, with closest point distances of a few centimeters between the independent datasets. Next, we go on to explore the method in more challenging conditions, in response to a major landslide in Mesa County, Colorado, on 25th May 2014. Photographs collected from a small UAV were used to generate a high-resolution model of the 4.5 x 1 km landslide several days before an airborne LiDAR survey could be organized and flown. An initial estimate of the mass balance of the landslide could quickly be made by differencing this model against pre-event topography generated using stereo photographs collected in 2009 as part of the National Agricultural Imagery Program (NAIP). This case study therefore demonstrates the rich potential offered by this technique, as well as some of the challenges, particularly with respect to the treatment of vegetation.
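    The mass-balance estimate described in this record amounts to differencing coregistered pre- and post-event elevation grids and summing the volume change per cell. A minimal sketch, assuming both DEMs are already resampled to the same grid (the actual differencing workflow is not detailed in the abstract):

```python
import numpy as np

def mass_balance(dem_pre, dem_post, cell_size):
    """First-order landslide mass balance from two coregistered DEM grids.

    Returns (volume_gained, volume_lost) in cubic units of cell_size,
    ignoring cells that are NaN (nodata) in either epoch.
    """
    dz = dem_post - dem_pre
    valid = ~(np.isnan(dem_pre) | np.isnan(dem_post))
    cell_area = cell_size ** 2
    gained = np.nansum(np.where(valid & (dz > 0), dz, 0.0)) * cell_area
    lost = np.nansum(np.where(valid & (dz < 0), -dz, 0.0)) * cell_area
    return gained, lost
```

Comparing `gained` against `lost` gives the kind of initial mass-balance estimate described above; in a real workflow the dominant error terms are DEM coregistration and vegetation, as the case study notes.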

  9. The Topographic Data Deluge - Collecting and Maintaining Data in a 21ST Century Mapping Agency

    NASA Astrophysics Data System (ADS)

    Holland, D. A.; Pook, C.; Capstick, D.; Hemmings, A.

    2016-06-01

    In the last few years, the number of sensors and data collection systems available to a mapping agency has grown considerably. In the field, in addition to total stations measuring position, angles and distances, the surveyor can choose from hand-held GPS devices, multi-lens imaging systems or laser scanners, which may be integrated with a laptop or tablet to capture topographic data directly in the field. These systems are joined by mobile mapping solutions, mounted on large or small vehicles, or sometimes even on a backpack carried by a surveyor walking around a site. Such systems allow the raw data to be collected rapidly in the field, while the interpretation of the data can be performed back in the office at a later date. In the air, large format digital cameras and airborne lidar sensors are being augmented with oblique camera systems, taking multiple views at each camera position and being used to create more realistic 3D city models. Lower down in the atmosphere, Unmanned Aerial Vehicles (or Remotely Piloted Aircraft Systems) have suddenly become ubiquitous. Hundreds of small companies have sprung up, providing images from UAVs using ever more capable consumer cameras. It is now easy to buy a 42 megapixel camera off the shelf at the local camera shop, and Canon recently announced that they are developing a 250 megapixel sensor for the consumer market. While these sensors may not yet rival the metric cameras used by today's photogrammetrists, the rapid developments in sensor technology could eventually lead to the commoditization of high-resolution camera systems. With data streaming in from so many sources, the main issue for a mapping agency is how to interpret, store and update the data in such a way as to enable the creation and maintenance of the end product. 
This might be a topographic map, ortho-image or a digital surface model today, but soon it is just as likely to be a 3D point cloud, textured 3D mesh, 3D city model, or Building Information Model (BIM) with all the data interpretation and modelling that entails. In this paper, we describe research/investigations into the developing technologies and outline the findings for a National Mapping Agency (NMA). We also look at the challenges that these new data collection systems will bring to an NMA, and suggest ways that we may work to meet these challenges and deliver the products desired by our users.

  10. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area rests on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process within the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.

  11. Efficient Feature Extraction and Likelihood Fusion for Vehicle Tracking in Low Frame Rate Airborne Video

    DTIC Science & Technology

    2010-07-01

    imagery, persistent sensor array I. Introduction New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a...geometric occlusions between target and sensor , motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small...distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with com- putational photography techniques enable the

  12. An evaluation of onshore digital elevation models for tsunami inundation modelling

    NASA Astrophysics Data System (ADS)

    Griffin, J.; Latief, H.; Kongko, W.; Harig, S.; Horspool, N.; Hanung, R.; Rojali, A.; Maher, N.; Fountain, L.; Fuchs, A.; Hossen, J.; Upi, S.; Dewanto, S. E.; Cummins, P. R.

    2012-12-01

    Tsunami inundation models provide fundamental information about coastal areas that may be inundated in the event of a tsunami, along with additional parameters such as flow depth and velocity. This can inform disaster management activities including evacuation planning, impact and risk assessment, and coastal engineering. A fundamental input to tsunami inundation models is a digital elevation model (DEM). Onshore DEMs vary widely in resolution, accuracy, availability and cost. A proper assessment of how the accuracy and resolution of DEMs translate into uncertainties in modelled inundation is needed to ensure results are appropriately interpreted and used. This assessment can in turn inform data acquisition strategies depending on the purpose of the inundation model. For example, lower accuracy elevation data may give inundation results that are sufficiently accurate to plan a community's evacuation route but not sufficient to inform engineering of vertical evacuation shelters. A sensitivity study is undertaken to assess the utility of different available onshore digital elevation models for tsunami inundation modelling. We compare airborne interferometric synthetic aperture radar (IFSAR), ASTER and SRTM against high resolution (<1 m horizontal resolution, <0.15 m vertical accuracy) LiDAR or stereo-camera data in three Indonesian locations with different coastal morphologies (Padang, West Sumatra; Palu, Central Sulawesi; and Maumere, Flores), using three different computational codes (ANUGA, TUNAMI-N3 and TsunAWI). Tsunami inundation extents modelled with IFSAR are comparable with those modelled with the high resolution datasets and with historical tsunami run-up data. Large vertical errors (>10 m) and poor resolution of the coastline in the ASTER and SRTM elevation models cause modelled inundation to be much less extensive compared with models using better data and with observations. 
Therefore we recommend that ASTER and SRTM should not be used for modelling tsunami inundation in order to determine tsunami extent or any other measure of onshore tsunami hazard. We suggest that for certain disaster management applications where the important factor is the extent of inundation, such as evacuation planning, airborne IFSAR provides a good compromise between cost and accuracy; however the representation of flow parameters such as depth and velocity is not sufficient to inform detailed engineering of structures. Differences in modelled inundation extent between digital terrain models (DTM) and digital surface models (DSM) for LiDAR, high resolution stereo-camera and airborne IFSAR data are greater than differences between the data types. The presence of trees and buildings as solid elevation in the DSM leads to underestimated inundation extents compared with observations, while removal of these features in the DTM causes more extensive inundation. Further work is needed to resolve whether DTM or DSM should be used and, in particular for DTM, how and at what spatial scale roughness should be parameterized to appropriately account for the presence of buildings and vegetation. We also test model mesh resolutions up to 0.8 m but find that there are only negligible changes in inundation extent between 0.8 and 25 m mesh resolution, even using the highest resolution elevation data.

  13. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
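    The registration pipeline this record describes (SIFT matches filtered by RANSAC to estimate a band-to-band homography) can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: synthetic point correspondences stand in for SIFT matches, and the homography is estimated with the standard direct linear transform inside a RANSAC loop:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: solve for H mapping src -> dst (N x 2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector of A, reshaped to 3x3
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to N x 2 points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=2.5, seed=0):
    """RANSAC: sample 4 matches, fit H, score by reprojection error in pixels."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set of the best model for the final estimate.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

A threshold of a few pixels is consistent with the band-to-band alignment error under 2.5 pixels reported above; in the paper's hard case (visible vs. near-infrared bands with few manmade objects), plain SIFT matching fails and the authors exploit the fixed geometry of the four-camera rig instead.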

  14. A low-cost single-camera imaging system for aerial applicators

    USDA-ARS?s Scientific Manuscript database

    Agricultural aircraft provide a readily available and versatile platform for airborne remote sensing. Although various airborne imaging systems are available, most of these systems are either too expensive or too complex to be of practical use for aerial applicators. The objective of this study was ...

  15. Processor architecture for airborne SAR systems

    NASA Technical Reports Server (NTRS)

    Glass, C. M.

    1983-01-01

    Digital processors for spaceborne imaging radars and application of the technology developed for airborne SAR systems are considered. Transferring algorithms and implementation techniques from airborne to spaceborne SAR processors offers obvious advantages. The following topics are discussed: (1) a quantification of the differences in processing algorithms for airborne and spaceborne SARs; and (2) an overview of three processors for airborne SAR systems.

  16. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    Polarization control for 3D Imaging with the Sony SRX-R105 Digital Cinema Projectors 3.4 HDMAX Camera and Sony SRX-R105 Projector Configuration for 3D...HDMAX Camera Pair Figure 3.2 Sony SRX-R105 Digital Cinema Projector Figure 3.3 Effect of camera rotation on projected overlay image. Figure 3.4...system that combines a pair of FAU’s HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection

  17. Digital control of the Kuiper Airborne Observatory telescope

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann C.; Snyder, Philip K.

    1989-01-01

    The feasibility of using a digital controller to stabilize a telescope mounted in an airplane is investigated. The telescope is a 30 in. infrared telescope mounted aboard a NASA C-141 aircraft known as the Kuiper Airborne Observatory. Current efforts to refurbish the 14-year-old compensation system have led to considering a digital controller. A typical digital controller is modeled and added into the telescope system model. This model is simulated on a computer to generate the Bode plots and time responses which determine system stability and performance parameters. Important aspects of digital control system hardware are discussed. A summary of the findings shows that a digital control system would result in satisfactory telescope performance.

  18. ASPIRE - Airborne Spectro-Polarization InfraRed Experiment

    NASA Astrophysics Data System (ADS)

    DeLuca, E.; Cheimets, P.; Golub, L.; Madsen, C. A.; Marquez, V.; Bryans, P.; Judge, P. G.; Lussier, L.; McIntosh, S. W.; Tomczyk, S.

    2017-12-01

    Direct measurements of coronal magnetic fields are critical for taking the next step in active region and solar wind modeling and for building the next generation of physics-based space-weather models. We are proposing a new airborne instrument to make these key observations. Building on the successful Airborne InfraRed Spectrograph (AIR-Spec) experiment for the 2017 eclipse, we will design and build a spectro-polarimeter to measure coronal magnetic field during the 2019 South Pacific eclipse. The new instrument will use the AIR-Spec optical bench and the proven pointing, tracking, and stabilization optics. A new cryogenic spectro-polarimeter will be built focusing on the strongest emission lines observed during the eclipse. The AIR-Spec IR camera, slit jaw camera and data acquisition system will all be reused. The poster will outline the optical design and the science goals for ASPIRE.

  19. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  20. Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.

    PubMed

    Porch, Timothy G; Erpelding, John E

    2006-04-30

    A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.

  1. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  2. The use of a modified technique to reduce radioactive air contamination in aerosol lung ventilation imaging.

    PubMed

    Avison, M; Hart, G

    2001-06-01

    The aim of this study was to reduce airborne contamination resulting from the use of aerosols in lung ventilation scintigraphy. Lung ventilation imaging is frequently performed with 99mTc-diethylenetriaminepentaacetate aerosol (DTPA), derived from a commercial nebuliser. Airborne contamination is a significant problem with this procedure; it results in exposure of staff to radiation and can reduce gamma camera performance when the ventilation is performed in the camera room. We examined the level of airborne contamination resulting from the standard technique with one of the most popular nebuliser kits and tested a modification which significantly reduced airborne contamination. Air contamination was measured while ventilating 122 patients. The modified technique reduced air contamination by a mean value of 64% (p = 0.028) compared with the standard control technique. Additionally, differences in contamination were examined when a mask or mouthpiece was used as well as differences between operators. A simplified method of monitoring air contamination is presented using a commonly available surface contamination monitor. The index so derived was proportional to air contamination (r = 0.88). The problems and regulations associated with airborne contamination are discussed.

  3. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. A user printing images from a camera therefore needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors, yielding a visually more pleasing image than one captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
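
    The idea of matching scene content to a printer's response via smoothed per-channel transfer curves can be sketched as lookup tables applied to an RGB image; the curve shape and parameters below are illustrative assumptions, not the curves generated in the study.

```python
import numpy as np

def smooth_transfer_curve(shadow_lift=0.08, gamma=0.9, n=256):
    """Build a smooth 8-bit transfer curve: a mild gamma adjustment plus a
    gentle midtone/shadow lift, mapped back to the 0-255 range."""
    x = np.linspace(0.0, 1.0, n)
    y = np.clip(x ** gamma + shadow_lift * np.sin(np.pi * x) * (1 - x), 0.0, 1.0)
    return np.round(y * 255).astype(np.uint8)

def apply_rgb_curves(image, curves):
    """Apply one 256-entry LUT per channel to an HxWx3 uint8 image."""
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = curves[c][image[..., c]]
    return out

# Example: lift the blue channel slightly more than red and green.
curves = [smooth_transfer_curve(0.05), smooth_transfer_curve(0.05),
          smooth_transfer_curve(0.10)]
img = np.zeros((2, 2, 3), dtype=np.uint8)  # a dark test patch
print(apply_rgb_curves(img, curves)[0, 0])
```

    Keeping the curves smooth and monotonic avoids posterization and tone reversals in the print.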

  4. Comparison of mosaicking techniques for airborne images from consumer-grade cameras

    USDA-ARS?s Scientific Manuscript database

    Images captured from airborne imaging systems have the advantages of relatively low cost, high spatial resolution, and real/near-real-time availability. Multiple images taken from one or more flight lines could be used to generate a high-resolution mosaic image, which could be useful for diverse rem...

  5. Airborne and Ground-Based Platforms for Data Collection in Small Vineyards: Examples from the UK and Switzerland

    NASA Astrophysics Data System (ADS)

    Green, David R.; Gómez, Cristina; Fahrentrapp, Johannes

    2015-04-01

    This paper presents an overview of some of the low-cost ground and airborne platforms and technologies now becoming available for data collection in small area vineyards. Low-cost UAV or UAS platforms and cameras are now widely available as the means to collect both vertical and oblique aerial still photography and airborne videography in vineyards. Examples of small aerial platforms include the AR Parrot Drone, the DJI Phantom (1 and 2), and 3D Robotics IRIS+. Both fixed-wing and rotary-wing platforms offer numerous advantages for aerial image acquisition including the freedom to obtain high resolution imagery at any time required. Imagery captured can be stored on mobile devices such as an Apple iPad and shared, written directly to a memory stick or card, or saved to the Cloud. The imagery can either be visually interpreted or subjected to semi-automated analysis using digital image processing (DIP) software to extract information about vine status or the vineyard environment. At the ground-level, a radio-controlled 'rugged' model 4x4 vehicle can also be used as a mobile platform to carry a number of sensors (e.g. a Go-Pro camera) around a vineyard, thereby facilitating quick and easy field data collection from both within the vine canopy and rows. For the small vineyard owner/manager with limited financial resources, this technology has a number of distinct advantages to aid in vineyard management practices: it is relatively cheap to purchase; requires a short learning curve to use and to master; can make use of autonomous ground control units for repetitive coverage enabling reliable monitoring; and information can easily be analysed and integrated within a GIS with minimal expertise. In addition, these platforms make widespread use of familiar and everyday, off-the-shelf technologies such as WiFi, Go-Pro cameras, Cloud computing, and smartphones or tablets as the control interface, all with a large and well established end-user support base.
Whilst there are still some limitations which constrain their use, including battery power and flight time, data connectivity, and payload capacity, such platforms nevertheless offer quick, low-cost, easy, and repeatable ways to capture valuable contextual data for small vineyards, complementing other sources of data used in Precision Viticulture (PV) and vineyard management. As these technologies continue to evolve very quickly, and more lightweight sensors become available for the smaller ground and airborne platforms, this will offer even more possibilities for a wider range of information to be acquired to aid in the monitoring, mapping, and management of small vineyards. The paper is illustrated with some examples from the UK and Switzerland.

  6. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
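
    The classification stage described above can be sketched with synthetic data. The paper trains a support vector machine on aberration measurements; for a dependency-free illustration, a nearest-class-mean classifier stands in for the SVM, and the per-camera radial distortion coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-camera radial distortion coefficients (k1, k2); each
# measured image's estimate scatters around its camera's true values.
cameras = {"A": (-0.21, 0.04), "B": (-0.05, 0.01), "C": (0.12, -0.02)}
train = {name: np.array([(k1, k2)]) + rng.normal(0, 0.01, (50, 2))
         for name, (k1, k2) in cameras.items()}

# Nearest-class-mean classifier standing in for the paper's SVM stage.
centroids = {name: pts.mean(axis=0) for name, pts in train.items()}

def identify(features):
    """Return the camera whose distortion signature is closest."""
    return min(centroids, key=lambda n: np.linalg.norm(centroids[n] - features))

# A new image whose estimated distortion is close to camera B's signature.
print(identify(np.array([-0.049, 0.012])))
```

    The same feature vectors would simply be fed to an SVM in the paper's pipeline; the point is that per-camera distortion parameters cluster tightly enough to act as fingerprints.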

  7. Characterizing Urban Volumetry Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Santos, T.; Rodrigues, A. M.; Tenedório, J. A.

    2013-05-01

    Urban indicators are efficient tools designed to simplify, quantify and communicate relevant information for land planners. Since urban data has a strong spatial representation, one can use geographical data as the basis for constructing information regarding urban environments. One important source of information about the land status is imagery collected through remote sensing. Afterwards, using digital image processing techniques, thematic detail can be extracted from those images and used to build urban indicators. The most common metrics are based on area (2D) measurements. These include indicators like impervious area per capita or surface occupied by green areas, having usually as primary source a spectral image obtained through a satellite or airborne camera. More recently, laser scanning data has become available for large-scale applications. Such sensors acquire altimetric information and are used to produce Digital Surface Models (DSM). In this context, LiDAR data available for the city is explored along with demographic information; a framework to produce volumetric (3D) urban indexes is proposed, and measures such as Built Volume per capita, Volumetric Density and Volumetric Homogeneity are computed.
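
    Once a normalized surface model (DSM minus a terrain model) is available, volumetric indicators of this kind reduce to simple grid arithmetic. The grids, cell size, and population below are toy values for illustration, not data from the study.

```python
import numpy as np

# Toy 4x4 grids (metres): a DSM (surfaces incl. buildings) and a DTM
# (bare earth); cell size and population are illustrative assumptions.
cell_area = 25.0          # 5 m x 5 m cells
population = 120
dtm = np.zeros((4, 4))
dsm = np.zeros((4, 4))
dsm[1:3, 1:3] = 12.0      # one 10 m x 10 m block, 12 m tall

heights = np.clip(dsm - dtm, 0, None)          # normalized surface model
built_volume = heights.sum() * cell_area       # m^3
footprint = (heights > 2.0).sum() * cell_area  # built-up area, m^2
total_area = heights.size * cell_area

print(f"built volume per capita: {built_volume / population:.1f} m^3")
print(f"volumetric density:      {built_volume / total_area:.1f} m^3/m^2")
```

    Volumetric homogeneity could be derived similarly, e.g. from the dispersion of per-cell heights within a district.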

  8. Making Connections with Digital Data

    ERIC Educational Resources Information Center

    Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert

    2004-01-01

    State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…

  9. Development of digital shade guides for color assessment using a digital camera with ring flashes.

    PubMed

    Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan

    2011-02-01

    Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and the camera's white balance setup would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived with Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using the digital shade guides, verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable and showed relatively low matching ability. In conclusion, the reliability of color matching with digital images is strongly influenced by the illuminants and the camera's white balance setup, while digital shade guides derived under LED illuminants with CWB show practical potential for color assessment.
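
    Matching against digital shade guides of this kind amounts to finding the guide with the smallest colour difference to the sample. The sketch below uses the standard CIE76 ΔE formula; the L*a*b* guide values are hypothetical, not the study's measurements.

```python
import math

# Hypothetical digital shade guides: CIE L*a*b* values derived from
# images captured under LED light with custom white balance (CWB).
shade_guides = {
    "A1": (76.2, 1.5, 18.4),
    "A2": (73.8, 2.9, 21.7),
    "B1": (78.0, 0.4, 15.2),
}

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

def best_match(sample_lab):
    """Return the shade guide with the smallest Delta E to the sample."""
    return min(shade_guides, key=lambda s: delta_e(shade_guides[s], sample_lab))

print(best_match((74.1, 2.5, 21.0)))
```

    More recent ΔE formulas (CIE94, CIEDE2000) weight lightness and chroma differently, but the nearest-guide logic is the same.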

  10. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  11. Current Status and Future Plans of the NEON Airborne Observation Platform (AOP): Data Products, Observatory Requirements and Opportunities for the Community

    NASA Astrophysics Data System (ADS)

    Petroy, S. B.; Leisso, N.; Goulden, T.; Gulbransen, T.

    2016-12-01

    The National Ecological Observatory Network (NEON) is a continental-scale ecological observation platform designed to collect and disseminate data that contributes to understanding and forecasting the impacts of climate change, land use change, and invasive species on ecology. NEON will collect in-situ and airborne data over 81 sites across the US, including Alaska, Hawaii, and Puerto Rico. The Airborne Observation Platform (AOP) group within the NEON project operates a payload suite that includes a waveform LiDAR, imaging spectrometer (NIS) and high resolution RGB camera. Data from this sensor suite will be collected annually over each site and processed into a set of standard data products, generally following the processing levels used by NASA (Level 1 through Level 3). We will present a summary of the first operational flight campaign (2016), where AOP flew 42 of the 81 planned NEON sites, our operational plans for 2017, and how we will ramp up to full operations by 2018. We will also describe the final set of AOP data products to be delivered as part of NEON construction and those field (observational) data products collected concurrently on the ground, that may be used to support validation efforts of algorithms for deriving vegetation characteristics from airborne data (e.g. Plant foliar physical/chemical properties, Digital Hemispherical Photos, Plant Diversity, etc.). Opportunities for future enhancements to data products or algorithms will be facilitated via NEON's cyberinfrastructure, which is designed to support wrapping/integration of externally-developed code. And finally, we will present NEON's plans for the third AOP Sensor Suite as an assignable asset and the intent of NSF to provide research opportunities to the community for developing higher level AOP data products that were removed from the NEON project in 2015.

  12. Computerized digital dermoscopy.

    PubMed

    Gewirtzman, A J; Braun, R P

    2003-01-01

    Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to get second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products begins to grow, so do the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments which can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated as well as the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.

  13. Optimization of digitization procedures in cultural heritage preservation

    NASA Astrophysics Data System (ADS)

    Martínez, Bea; Mitjà, Carles; Escofet, Jaume

    2013-11-01

    The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only a wider diffusion and online transmission, but also the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly texturized flat materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera, and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in reproduction work so as to preserve the original item's properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management, and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate black-and-white photographic negatives.

  14. Airborne Optical and Thermal Remote Sensing for Wildfire Detection and Monitoring.

    PubMed

    Allison, Robert S; Johnston, Joshua M; Craig, Gregory; Jennings, Sion

    2016-08-18

    For decades detection and monitoring of forest and other wildland fires has relied heavily on aircraft (and satellites). Technical advances and improved affordability of both sensors and sensor platforms promise to revolutionize the way aircraft detect, monitor and help suppress wildfires. Sensor systems like hyperspectral cameras, image intensifiers and thermal cameras that have previously been limited in use due to cost or technology considerations are now becoming widely available and affordable. Similarly, new airborne sensor platforms, particularly small, unmanned aircraft or drones, are enabling new applications for airborne fire sensing. In this review we outline the state of the art in direct, semi-automated and automated fire detection from both manned and unmanned aerial platforms. We discuss the operational constraints and opportunities provided by these sensor systems including a discussion of the objective evaluation of these systems in a realistic context.

  15. Airborne Optical and Thermal Remote Sensing for Wildfire Detection and Monitoring

    PubMed Central

    Allison, Robert S.; Johnston, Joshua M.; Craig, Gregory; Jennings, Sion

    2016-01-01

    For decades detection and monitoring of forest and other wildland fires has relied heavily on aircraft (and satellites). Technical advances and improved affordability of both sensors and sensor platforms promise to revolutionize the way aircraft detect, monitor and help suppress wildfires. Sensor systems like hyperspectral cameras, image intensifiers and thermal cameras that have previously been limited in use due to cost or technology considerations are now becoming widely available and affordable. Similarly, new airborne sensor platforms, particularly small, unmanned aircraft or drones, are enabling new applications for airborne fire sensing. In this review we outline the state of the art in direct, semi-automated and automated fire detection from both manned and unmanned aerial platforms. We discuss the operational constraints and opportunities provided by these sensor systems including a discussion of the objective evaluation of these systems in a realistic context. PMID:27548174

  16. Photometry of Galactic and Extragalactic Far-Infrared Sources using the 91.5 cm Airborne Infrared Telescope

    NASA Technical Reports Server (NTRS)

    Harper, D. A.

    1996-01-01

    The objective of this grant was to construct a series of far infrared photometers, cameras, and supporting systems for use in astronomical observations in the Kuiper Airborne Observatory. The observations have included studies of galaxies, star formation regions, and objects within the Solar System.

  17. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
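
    Focus stacking of the kind these cameras perform can be sketched generically as picking, per pixel, the focal slice with the highest local sharpness. This is a common approach for illustration only, not necessarily the camera's internal algorithm.

```python
import numpy as np

def laplacian_energy(img):
    """Absolute discrete Laplacian as a simple per-pixel sharpness measure."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.abs(lap)

def focus_stack(slices):
    """Fuse focal slices by taking, per pixel, the slice that is
    locally sharpest."""
    stack = np.stack(slices)
    sharpness = np.stack([laplacian_energy(s) for s in slices])
    best = sharpness.argmax(axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two toy slices, each "in focus" (containing detail) in a different region.
a = np.zeros((6, 6)); a[1:3, 1:3] = 1.0   # detail near the top-left
b = np.zeros((6, 6)); b[4, 4] = 1.0       # detail near the bottom-right
fused = focus_stack([a, b])
print(fused[1, 1], fused[4, 4])
```

    Production stacking software adds alignment between slices and smooth blending of the selection map, which matters for real specimen photographs.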

  18. Improved Airborne System for Sensing Wildfires

    NASA Technical Reports Server (NTRS)

    McKeown, Donald; Richardson, Michael

    2008-01-01

    The Wildfire Airborne Sensing Program (WASP) is engaged in a continuing effort to develop an improved airborne instrumentation system for sensing wildfires. The system could also be used for other aerial-imaging applications, including mapping and military surveillance. Unlike prior airborne fire-detection instrumentation systems, the WASP system would not be based on custom-made multispectral line scanners and associated custom- made complex optomechanical servomechanisms, sensors, readout circuitry, and packaging. Instead, the WASP system would be based on commercial off-the-shelf (COTS) equipment that would include (1) three or four electronic cameras (one for each of three or four wavelength bands) instead of a multispectral line scanner; (2) all associated drive and readout electronics; (3) a camera-pointing gimbal; (4) an inertial measurement unit (IMU) and a Global Positioning System (GPS) receiver for measuring the position, velocity, and orientation of the aircraft; and (5) a data-acquisition subsystem. It would be necessary to custom-develop an integrated sensor optical-bench assembly, a sensor-management subsystem, and software. The use of mostly COTS equipment is intended to reduce development time and cost, relative to those of prior systems.

  19. A Cryogenic, Insulating Suspension System for the High Resolution Airborne Wideband Camera (HAWC)and Submillemeter And Far Infrared Experiment (SAFIRE) Adiabatic Demagnetization Refrigerators (ADRs)

    NASA Technical Reports Server (NTRS)

    Voellmer, George M.; Jackson, Michael L.; Shirron, Peter J.; Tuttle, James G.

    2002-01-01

    The High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter And Far Infrared Experiment (SAFIRE) will use identical Adiabatic Demagnetization Refrigerators (ADR) to cool their detectors to 200mK and 100mK, respectively. In order to minimize thermal loads on the salt pill, a Kevlar suspension system is used to hold it in place. An innovative, kinematic suspension system is presented. The suspension system is unique in that it consists of two parts that can be assembled and tensioned offline, and later bolted onto the salt pill.

  20. Monitoring Eruptive Activity at Mount St. Helens with TIR Image Data

    NASA Technical Reports Server (NTRS)

    Vaughan, R. G.; Hook, S. J.; Ramsey, M. S.; Realmuto, V. J.; Schneider, D. J.

    2005-01-01

    Thermal infrared (TIR) data from the MASTER airborne imaging spectrometer were acquired over Mount St. Helens in September and October 2004, before and after the onset of recent eruptive activity. Pre-eruption data showed no measurable increase in surface temperatures before the first phreatic eruption on Oct 1. MASTER data acquired during the initial eruptive episode on Oct 14 showed maximum temperatures of approximately 330 °C, and TIR data acquired concurrently from a Forward Looking Infrared (FLIR) camera showed maximum temperatures of approximately 675 °C in narrow (approximately 1-m) fractures of molten rock on a new resurgent dome. MASTER and FLIR thermal flux calculations indicated a radiative cooling rate of approximately 714 J/m²/s over the new dome, corresponding to a radiant power of approximately 24 MW. MASTER data indicated the new dome was dacitic in composition, and digital elevation data derived from LIDAR acquired concurrently with MASTER showed that the dome growth correlated with the areas of elevated temperatures. Low SO2 concentrations in the plume combined with sub-optimal viewing conditions prohibited quantitative measurement of plume SO2. The results demonstrate that airborne TIR data can provide information on the temperature of both the surface and the plume and on the composition of new lava during eruptive episodes. Given sufficient resources, the airborne instrumentation could be deployed rapidly to a newly awakening volcano and provide a means for remote volcano monitoring.
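
    The reported figures can be cross-checked with simple arithmetic: dividing the radiant power by the per-area cooling rate gives the implied emitting area, and the Stefan-Boltzmann law gives the grey-body flux at a given surface temperature. The emissivity below is an illustrative assumption, not a value from the study.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_flux(T_kelvin, emissivity=0.95):
    """Grey-body radiative flux (W/m^2) at surface temperature T."""
    return emissivity * SIGMA * T_kelvin ** 4

q = 714.0   # J/m^2/s, the reported area-averaged cooling rate
P = 24e6    # W, the reported radiant power

# Implied emitting area over which the average flux was integrated.
print(f"implied emitting area: {P / q:.0f} m^2")
# Flux a surface at the MASTER maximum temperature would emit.
print(f"flux of a 330 C surface: {radiative_flux(330 + 273.15):.0f} W/m^2")
```

    The ~330 °C maximum emits an order of magnitude more per unit area than the 714 J/m²/s dome average, consistent with hot material being confined to narrow fractures within a mostly cooler dome surface.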

  1. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and can thus provide high-quality, high-resolution images. This digital large-format tile-scan camera acquires high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  2. Satellite land remote sensing advancements for the eighties; Proceedings of the Eighth Pecora Symposium, Sioux Falls, SD, October 4-7, 1983

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Among the topics discussed are NASA's land remote sensing plans for the 1980s, the evolution of Landsat 4 and the performance of its sensors, the Landsat 4 thematic mapper image processing system radiometric and geometric characteristics, data quality, image data radiometric analysis and spectral/stratigraphic analysis, and thematic mapper agricultural, forest resource and geological applications. Also covered are geologic applications of side-looking airborne radar, digital image processing, the large format camera, the RADARSAT program, the SPOT 1 system's program status, distribution plans, and simulation program, Space Shuttle multispectral linear array studies of the optical and biological properties of terrestrial land cover, orbital surveys of solar-stimulated luminescence, the Space Shuttle imaging radar research facility, and Space Shuttle-based polar ice sounding altimetry.

  3. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film, requiring time-consuming and tedious wet processing. Digital cameras are currently replacing conventional cameras to a certain extent for static experiments, and there has recently been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Here we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.

  4. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU)(MIPS-Microscopic Image projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results when capturing light microscopy images, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  5. Aspects of detection and tracking of ground targets from an airborne EO/IR sensor

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Daya, Zahir; Kirubarajan, Thiagalingam

    2015-05-01

    An airborne EO/IR (electro-optical/infrared) camera system comprises a suite of sensors, such as narrow and wide field-of-view (FOV) EO sensors and a mid-wave IR sensor. EO/IR camera systems are regularly employed on military and search-and-rescue aircraft. The EO/IR system can be used to detect and identify objects rapidly in daylight and at night, often with superior performance in challenging conditions such as fog. Several algorithms exist for detecting potential targets in the bearing-elevation grid. The nonlinear filtering problem is one of estimating the kinematic parameters of a target from bearing and elevation measurements made from a moving platform. In this paper, we develop a complete model for the state of a target as detected by an airborne EO/IR system and simulate a typical scenario with a single target and one or two airborne sensors. We demonstrate the ability to track the target with high precision and note the improvement from using two sensors, whether on a single platform or on separate platforms. The performance of the Extended Kalman Filter (EKF) is investigated on simulated data. Image/video data collected from an IR sensor on an airborne platform are processed using a tracking-by-detection algorithm.
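
    The bearing/elevation filtering step can be illustrated with a minimal EKF sketch, assuming a static 3-D target position state and a single airborne sensor at known altitude. The geometry, noise values, and helper names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def h(x, sensor):
    """Bearing (azimuth) and elevation of target x seen from the sensor."""
    dx, dy, dz = x - sensor
    rho = np.hypot(dx, dy)
    return np.array([np.arctan2(dy, dx), np.arctan2(dz, rho)])

def H_jac(x, sensor):
    """Jacobian of the bearing/elevation measurement w.r.t. position."""
    dx, dy, dz = x - sensor
    r2 = dx * dx + dy * dy
    rho = np.sqrt(r2)
    s2 = r2 + dz * dz
    return np.array([[-dy / r2, dx / r2, 0.0],
                     [-dx * dz / (rho * s2), -dy * dz / (rho * s2), rho / s2]])

def ekf_update(x, P, z, sensor, R):
    """One EKF measurement update for a static 3-D position state."""
    Hj = H_jac(x, sensor)
    S = Hj @ P @ Hj.T + R
    K = P @ Hj.T @ np.linalg.inv(S)
    return x + K @ (z - h(x, sensor)), (np.eye(3) - K @ Hj) @ P

# Prior guess, true target, and a noiseless measurement from one platform.
x0 = np.array([900.0, 450.0, 0.0]); P0 = np.eye(3) * 1e4
truth = np.array([1000.0, 500.0, 0.0])
sensor = np.array([0.0, 0.0, 3000.0])       # airborne platform at 3 km
R = np.eye(2) * (1e-3) ** 2                  # ~1 mrad measurement noise
x1, P1 = ekf_update(x0, P0, h(truth, sensor), sensor, R)
print(np.linalg.norm(x1 - truth) < np.linalg.norm(x0 - truth))
```

    A second sensor on a separate platform would contribute another bearing/elevation pair with a different geometry, which is what resolves range and drives the precision improvement the paper reports.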

  6. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine whether a specific image has been made with a given camera: defects in CCDs, the file formats used, noise introduced by the pixel arrays, and watermarking applied by the camera manufacturer.

  7. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight applications do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  8. PtSi gimbal-based FLIR for airborne applications

    NASA Astrophysics Data System (ADS)

    Wallace, Joseph; Ornstein, Itzhak; Nezri, M.; Fryd, Y.; Bloomberg, Steve; Beem, S.; Bibi, B.; Hem, S.; Perna, Steve N.; Tower, John R.; Lang, Frank B.; Villani, Thomas S.; McCarthy, D. R.; Stabile, Paul J.

    1997-08-01

    A new gimbal-based FLIR camera for several types of airborne platforms has been developed. The FLIR is based on a PtSi-on-silicon technology developed for high volume and minimum cost. The gimbal scans an area of 360 degrees in azimuth and an elevation range of +15 degrees to -105 degrees. It is stabilized to 25 µrad rms. A combination of uniformity correction, defect substitution, and compact optics results in a long-range, low-cost FLIR for all low-speed airborne platforms.

  9. Corn and sorghum phenotyping using a fixed-wing UAV-based remote sensing system

    NASA Astrophysics Data System (ADS)

    Shi, Yeyin; Murray, Seth C.; Rooney, William L.; Valasek, John; Olsenholler, Jeff; Pugh, N. Ace; Henrickson, James; Bowden, Ezekiel; Zhang, Dongyan; Thomasson, J. Alex

    2016-05-01

    Recent developments in unmanned aerial systems have created opportunities to automate field-based high-throughput phenotyping by lowering flight operational cost and complexity and by allowing flexible revisit times and higher image resolution than satellite or manned airborne remote sensing. In this study, flights were conducted over corn and sorghum breeding trials in College Station, Texas, with a fixed-wing unmanned aerial vehicle (UAV) carrying two multispectral cameras and a high-resolution digital camera. The objectives were to establish the workflow and investigate the ability of UAV-based remote sensing to automate data collection of plant traits for developing genetic and physiological models. Most important among these traits were plant height and the number of plants, which are currently collected manually at high labor cost. Vegetation indices were calculated for each breeding cultivar from mosaicked and radiometrically calibrated multi-band imagery in order to be correlated with ground-measured plant heights, populations and yield across high-genetic-diversity breeding cultivars. Growth curves were profiled from the aerially measured time-series height and vegetation index data. The next step of this study will be to investigate the correlations between aerial measurements and ground truth measured manually in the field and from lab tests.
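    The vegetation-index step described above reduces to a per-pixel band ratio over the calibrated imagery. A minimal sketch, with NDVI chosen as a representative index (the abstract does not name the specific indices used) and the array/mask names being assumptions:

    ```python
    import numpy as np

    def ndvi(red, nir, eps=1e-9):
        """Normalized Difference Vegetation Index from co-registered red and NIR bands."""
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + eps)   # eps guards against zero-sum pixels

    def plot_mean_ndvi(index_map, plot_mask):
        """Mean NDVI over the pixels belonging to one breeding plot."""
        return float(index_map[plot_mask].mean())
    ```

    Per-cultivar values like this are what would then be correlated with ground-measured height, population and yield.
    
    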

  10. Conceptual design of the CZMIL data acquisition system (DAS): integrating a new bathymetric lidar with a commercial spectrometer and metric camera for coastal mapping applications

    NASA Astrophysics Data System (ADS)

    Fuchs, Eran; Tuell, Grady

    2010-04-01

    The CZMIL system is a new-generation airborne bathymetric and topographic remote sensing platform composed of an active lidar, a passive hyperspectral imager, a high-resolution frame camera, a navigation system, and storage media running on a Linux-based Gigabit Ethernet network. The lidar is a hybrid scanned-flash system employing a 10 kHz green laser and a novel circular scanner, with a large-aperture receiver (0.20 m) having multiple channels. A PMT-based segmented detector is used on one channel to support simultaneous topographic and bathymetric data collection, and multiple fields-of-view are measured to support bathymetric measurements. The measured laser returns are digitized at 1 GHz to produce the waveforms required for ranging measurements, and unique data compression and storage techniques are used to address the large data volume. Simulated results demonstrate CZMIL's capability to discriminate bottom and surface returns in very shallow water without compromising performance in deep water. Simulated waveforms are compared with measured data from the SHOALS system and show promising expected results. The system's prototype is expected to be completed by end of 2010, and ready for initial calibration tests in the spring of 2010.

  11. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot auto-focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  12. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists, mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  13. Can light-field photography ease focusing on the scalp and oral cavity?

    PubMed

    Taheri, Arash; Feldman, Steven R

    2013-08-01

    Capturing a well-focused image with an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data on the color, intensity, and direction of rays of light. With information on the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity, and the related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use, and they can capture more information across the full depth of field than conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.

  15. Networked Airborne Communications Using Adaptive Multi Beam Directional Links

    DTIC Science & Technology

    2016-03-05

    Networked Airborne Communications Using Adaptive Multi-Beam Directional Links. R. Bruce MacLeod, Member, IEEE, and Adam Margetts, Member, IEEE, MIT… provide new techniques for increasing throughput in airborne adaptive directional networks. By adaptive directional linking, we mean systems that can… techniques can dramatically increase the capacity in airborne networks. Advances in digital array technology are beginning to put these gains within reach

  16. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…

  17. Camera! Action! Collaborate with Digital Moviemaking

    ERIC Educational Resources Information Center

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  18. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    NASA Astrophysics Data System (ADS)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add to and improve the features of their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which can lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass-filter, passive AF method. This method is widely used to realize AF in the camera industry: a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning to arrive at a set of parameters that balance AF speed and accuracy, ultimately delaying product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to superior AF performance in both good and low lighting conditions based on the following performance metrics: speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto-focusing approach can be used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
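    The passive AF loop the abstract describes, maximizing a filtered sharpness measure over lens positions, can be sketched as follows. The Laplacian focus measure and the coarse/fine step sizes are illustrative stand-ins, not the dissertation's derived parameters:

    ```python
    import numpy as np

    def sharpness(img):
        """Focus measure: energy of a simple high-pass (Laplacian) response,
        standing in for the band-pass filter output used in passive AF."""
        lap = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
               - img[1:-1, :-2] - img[1:-1, 2:])
        return float((lap ** 2).sum())

    def hill_climb_af(measure, positions, coarse=8):
        """Coarse-to-fine search over contiguous integer lens positions for the
        peak of the focus measure; `measure` maps position -> sharpness."""
        best = max(positions[::coarse], key=measure)       # coarse sweep
        lo = max(best - coarse, positions[0])
        hi = min(best + coarse, positions[-1])
        return max(range(lo, hi + 1), key=measure)         # fine sweep around peak
    ```

    The dissertation's metrics map directly onto this loop: iteration count (speed), distance of the returned position from the true peak (accuracy), and total actuator travel (power).
    
    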

  19. High Scalability Video ISR Exploitation

    DTIC Science & Technology

    2012-10-01

    Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available… purchase ultra-high quality cameras like the Digital Cinema 4K (DC-4K) for use in the field. However, even if such a UAV sensor with a DC-4K was flown

  20. Materials to enable vehicle and personnel identification from surveillance aircraft equipped with visible and IR cameras

    NASA Astrophysics Data System (ADS)

    O'Keefe, Eoin S.

    2005-10-01

    As thermal imaging technology matures and ownership costs decrease, there is a trend to equip a greater proportion of the airborne surveillance vehicles used by security and defence forces with both visible-band and thermal infrared cameras. These cameras are used for tracking vehicles on the ground, to aid in the pursuit of villains in vehicles and on foot, while also assisting in the direction and co-ordination of emergency service vehicles as the occasion arises. These functions rely on unambiguous identification of police and other emergency service vehicles. In the visible band this is achieved by dark markings against high-contrast (light) backgrounds on the roofs of vehicles. When there is no ambient lighting, for example at night, thermal imaging is used to track both vehicles and people. In the thermal IR, the visible markings are not obvious: at the wavelengths thermal imagers operate, either 3-5 microns or 8-12 microns, the dark- and light-coloured materials have similarly low reflectivity. To maximise the usefulness of IR airborne surveillance, a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared is needed. In this paper we discuss the design, application and operation of some vehicle and personnel marking materials and show airborne IR and visible imagery of the materials in use.

  1. Organize Your Digital Photos: Display Your Images Without Hogging Hard-Disk Space

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    According to InfoTrends/CAP Ventures, by the end of this year more than 55 percent of all U.S. households will own at least one digital camera. With so many digital cameras in use, it is important for people to understand how to organize and store digital images in ways that make them easy to find. Additionally, today's affordable, large megapixel…

  2. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, so the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor therefore includes pixel-to-pixel variability in read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensor to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual-pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination, from dark conditions through multiple low light levels (20 to 1,000 photons/pixel per frame) to higher light levels. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
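    Per-pixel offset, read-noise and gain maps of the kind measured here are commonly estimated with the mean-variance (photon-transfer) approach: under shot-noise-limited illumination, the dark-corrected variance of each pixel is linear in its dark-corrected mean, with slope equal to the gain. The sketch below assumes stacks of uniformly illuminated frames; it is a generic illustration, not Hamamatsu's or the authors' exact procedure.

    ```python
    import numpy as np

    def per_pixel_calibration(stacks, dark_stack):
        """Estimate per-pixel offset (DN), read noise (DN), and gain (DN per e-)
        from frame stacks at several uniform light levels.

        stacks: list of arrays shaped (frames, H, W), one per illumination level.
        dark_stack: (frames, H, W) acquired with no light.
        """
        offset = dark_stack.mean(axis=0)           # per-pixel offset
        read_var = dark_stack.var(axis=0)          # per-pixel read-noise variance
        means = np.stack([s.mean(axis=0) - offset for s in stacks])
        varis = np.stack([s.var(axis=0) - read_var for s in stacks])
        # Shot-noise model: variance = gain * mean; regression through the origin.
        gain = (means * varis).sum(axis=0) / (means ** 2).sum(axis=0)
        return offset, np.sqrt(read_var), gain
    ```

    Mapping these quantities per pixel, rather than assuming them uniform as in a CCD, is exactly what the noise models for maximum-likelihood localization require.
    
    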

  3. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  4. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    ERIC Educational Resources Information Center

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
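    The calibration step such a DIY spectrometer needs, mapping a pixel column on the camera sensor to a wavelength using a reference light, can be sketched as a linear fit. The mercury-line wavelengths and pixel positions below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def wavelength_calibration(ref_pixels, ref_wavelengths):
        """Fit a linear pixel-to-wavelength map (nm) from reference emission
        lines of known wavelength seen in the camera image."""
        slope, intercept = np.polyfit(ref_pixels, ref_wavelengths, 1)
        return lambda px: slope * np.asarray(px, dtype=float) + intercept

    def extract_spectrum(image):
        """Collapse a 2-D image of the dispersed spectrum to a 1-D intensity
        profile by averaging over the rows."""
        return image.astype(float).mean(axis=0)
    ```

    With two or more known lines (e.g. from a fluorescent lamp), any other pixel position can then be read off in nanometres.
    
    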

  5. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  6. Quantifying plant colour and colour difference as perceived by humans using digital images.

    PubMed

    Kendal, Dave; Hauser, Cindy E; Garrard, Georgia E; Jellinek, Sacha; Giljohann, Katherine M; Moore, Joslin L

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management.
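    A device-independent colour difference of the kind used to compare flowers can be computed by mapping camera sRGB values into CIELAB and taking the Euclidean distance (CIE76 ΔE). This is a generic sketch of that standard conversion (D65 white point), not the paper's exact pipeline:

    ```python
    import numpy as np

    def srgb_to_lab(rgb):
        """Convert an sRGB triple (0-255) to CIELAB (D65 white point)."""
        c = np.asarray(rgb, dtype=float) / 255.0
        c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)  # linearize
        M = np.array([[0.4124, 0.3576, 0.1805],      # linear RGB -> CIE XYZ
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        xyz = M @ c / np.array([0.95047, 1.0, 1.08883])  # normalize to D65 white
        f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                     xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

    def delta_e(rgb1, rgb2):
        """CIE76 colour difference between two sRGB colours."""
        return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))
    ```

    Because Lab distances approximate perceptual difference, a measure like this supports the paper's goal of relating plant colour to human perception across cameras.
    
    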

  7. Quantifying Plant Colour and Colour Difference as Perceived by Humans Using Digital Images

    PubMed Central

    Kendal, Dave; Hauser, Cindy E.; Garrard, Georgia E.; Jellinek, Sacha; Giljohann, Katherine M.; Moore, Joslin L.

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management. PMID:23977275

  8. Point Cloud Generation from sUAS-Mounted iPhone Imagery: Performance Analysis

    NASA Astrophysics Data System (ADS)

    Ladai, A. D.; Miller, J.

    2014-11-01

    The rapidly growing use of sUAS technology and fast sensor development continuously inspire mapping professionals to experiment with low-cost airborne systems. Smartphones have all the sensors used in modern airborne surveying systems, including GPS, IMU, camera, etc. Of course, the performance level of the sensors differs by orders of magnitude, yet it is intriguing to assess the potential of using inexpensive sensors installed on sUAS platforms for topographic applications. This paper focuses on the quality analysis of point clouds generated from overlapping images acquired by an iPhone 5s mounted on a sUAS platform. To support the investigation, test data were acquired over an area with complex topography and varying vegetation. In addition, extensive ground control, including GCPs and transects, was collected with GPS and traditional geodetic surveying methods. The statistical and visual analysis is based on a comparison of the UAS data and the reference dataset. After a successful data collection, the main question is always the reliability and accuracy of the georeferenced data; the results and evaluation provide a realistic measure of data acquisition system performance. The paper also gives a recommendation for a data processing workflow to achieve the best quality of the final products: the digital terrain model and orthophoto mosaic.

  9. Generation of topographic terrain models utilizing synthetic aperture radar and surface level data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L. (Inventor)

    1991-01-01

    Topographical terrain models are generated by digitally delineating the boundary of the region under investigation from the data obtained from an airborne synthetic aperture radar image and surface elevation data concurrently acquired either from an airborne instrument or at ground level. A set of coregistered boundary maps thus generated are then digitally combined in three dimensional space with the acquired surface elevation data by means of image processing software stored in a digital computer. The method is particularly applicable for generating terrain models of flooded regions covered entirely or in part by foliage.

  10. Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras

    DTIC Science & Technology

    1990-04-01

    poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each… [Nor88] Northern Digital. Trade literature on Optotrak, Northern Digital's three-dimensional optical motion tracking and analysis system.

  11. A Picture is Worth a Thousand Words

    ERIC Educational Resources Information Center

    Davison, Sarah

    2009-01-01

    Lions, tigers, and bears, oh my! Digital cameras, young inquisitive scientists, give it a try! In this project, students create an open-ended question for investigation, capture and record their observations--data--with digital cameras, and create a digital story to share their findings. The project follows a 5E learning cycle--Engage, Explore,…

  12. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.

  13. Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean

    PubMed Central

    Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave

    2009-01-01

    Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images, and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This current paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
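    The core of the described method, treating the camera as a three-band radiometer and forming band ratios from an image of the water surface, can be sketched as follows. The specific ratios and the dictionary keys are illustrative choices, not the paper's exact regression variables:

    ```python
    import numpy as np

    def band_ratios(image, mask=None):
        """Mean R, G, B of a water-surface image patch and the green/red and
        blue/green ratios used as optical water-quality proxies.

        image: (H, W, 3) RGB array; mask: optional boolean (H, W) patch selector.
        """
        px = (image.reshape(-1, 3) if mask is None else image[mask]).astype(float)
        r, g, b = px.mean(axis=0)
        return {"R": r, "G": g, "B": b, "G/R": g / r, "B/G": b / g}
    ```

    Ratios like these would then be regressed against in-water measurements of yellow substance or chlorophyll concentration.
    
    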

  14. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from that in the monochrome cameras usually employed in DH, and this has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochrome and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
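    The replication effect can be illustrated with a toy one-dimensional model: keeping only every other pixel, as a single Bayer colour plane does along each sensor axis, multiplies the hologram by a comb, which in the Fourier domain convolves the spectrum with delta functions and creates shifted copies of the fringe peaks. The array size and fringe frequency below are arbitrary choices for illustration:

    ```python
    import numpy as np

    N = 256
    k = 20                                       # fringe frequency (cycles per record)
    x = np.arange(N)
    fringes = np.cos(2 * np.pi * k * x / N)      # stand-in for hologram fringes

    mask = (x % 2 == 0).astype(float)            # one Bayer colour plane: every 2nd pixel
    spec_full = np.abs(np.fft.fft(fringes))
    spec_masked = np.abs(np.fft.fft(fringes * mask))
    # The masked spectrum carries replicas at N/2 - k and N/2 + k that are absent
    # from the fully sampled spectrum; in DH reconstruction these replicas appear
    # as duplicated copies of the object.
    ```
    
    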

  15. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  16. American Society for Photogrammetry and Remote Sensing and ACSM, Fall Convention, Reno, NV, Oct. 4-9, 1987, ASPRS Technical Papers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    Recent advances in remote-sensing technology and applications are examined in reviews and reports. Topics addressed include the use of Landsat TM data to assess suspended-sediment dispersion in a coastal lagoon, the use of sun incidence angle and IR reflectance levels in mapping old-growth coniferous forests, information-management systems, Large-Format-Camera soil mapping, and the economic potential of Landsat TM winter-wheat crop-condition assessment. Consideration is given to measurement of ephemeral gully erosion by airborne laser ranging, the creation of a multipurpose cadaster, high-resolution remote sensing and the news media, the role of vegetation in the global carbon cycle, PC applications in analytical photogrammetry, multispectral geological remote sensing of a suspected impact crater, fractional calculus in digital terrain modeling, and automated mapping using GP-based survey data.

  17. Comparative Accuracy Evaluation of Fine-Scale Global and Local Digital Surface Models: The Tshwane Case Study I

    NASA Astrophysics Data System (ADS)

    Breytenbach, A.

    2016-10-01

    Conducted in the City of Tshwane, South Africa, this study set out to test the accuracy of DSMs derived locally from different remotely sensed data. VHR digital mapping camera stereo-pairs, tri-stereo imagery collected by a Pléiades satellite and data detected from the TanDEM-X InSAR satellite configuration were fundamental in the construction of seamless DSM products at different postings, namely 2 m, 4 m and 12 m. The three DSMs were sampled against independent control points originating from validated airborne LiDAR data. The reference surfaces were derived from the same dense point cloud at grid resolutions corresponding to those of the samples. The absolute and relative positional accuracies were computed using well-known DEM error metrics and accuracy statistics. Overall vertical accuracies were also assessed and compared across seven slope classes and nine primary land cover classes. Although all three DSMs displayed significantly more vertical errors where solid waterbodies, dense natural and/or alien woody vegetation and, to a lesser degree, urban residential areas with significant canopy cover were encountered, all three surpassed their expected positional accuracies overall.
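    The "well-known DEM error metrics" referred to above typically include the mean error (bias), RMSE, and the robust NMAD. A minimal sketch (the check-point values are synthetic and `dem_error_metrics` is a hypothetical helper, not from the study):

```python
import numpy as np

def dem_error_metrics(dsm_z, ref_z):
    """Standard vertical error metrics of a DSM against reference
    (e.g. lidar) heights at the same check points."""
    dh = np.asarray(dsm_z, float) - np.asarray(ref_z, float)
    me = dh.mean()                    # mean error (bias)
    rmse = np.sqrt((dh ** 2).mean())  # root-mean-square error
    # NMAD: robust spread estimate, insensitive to outliers
    nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))
    return me, rmse, nmad

# Check points: DSM biased +0.5 m, with one 4 m outlier
dsm = np.array([10.5, 12.5, 9.5, 11.5, 15.0])
ref = np.array([10.0, 12.0, 9.0, 11.0, 11.0])
me, rmse, nmad = dem_error_metrics(dsm, ref)
```

    The example shows why NMAD is reported alongside RMSE: the single outlier inflates the RMSE but leaves the NMAD untouched.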

  18. Remote sensing of deep hermatypic coral reefs in Puerto Rico and the U.S. Virgin Islands using the Seabed autonomous underwater vehicle

    NASA Astrophysics Data System (ADS)

    Armstrong, Roy A.; Singh, Hanumant

    2006-09-01

    Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the required information for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image based navigation, and multi-sensor fusion of acoustic and optical data.

  19. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

    Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with x ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway as are analyses of the campaign data.

  20. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?", for we need as many megapixels as possible since the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes: the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars, and these small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.

  1. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to that of conventional film cameras. For example, the diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy of various methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
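    The spread-function method mentioned above estimates the MTF as the normalized magnitude of the Fourier transform of the line spread function (LSF). A minimal sketch under that standard definition (the Gaussian LSFs are synthetic):

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Spread-function method: the MTF is the magnitude of the Fourier
    transform of the line spread function, normalized to 1 at DC."""
    spec = np.abs(np.fft.rfft(lsf))
    return spec / spec[0]

# Gaussian LSFs: a wider blur should give a faster MTF roll-off
x = np.arange(-32, 32)
narrow = np.exp(-x**2 / (2 * 1.0**2))
wide = np.exp(-x**2 / (2 * 3.0**2))
mtf_narrow = mtf_from_lsf(narrow)
mtf_wide = mtf_from_lsf(wide)
```

    In a real measurement the LSF would come from imaging a narrow slit (or differentiating an edge response) rather than from a synthetic Gaussian.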

  2. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, and high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient; it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for full-scale 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  3. Evaluation of Digital Camera Technology For Bridge Inspection

    DOT National Transportation Integrated Search

    1997-07-18

    As part of a cooperative agreement between the Tennessee Department of Transportation and the Federal Highway Administration, a study was conducted to evaluate current levels of digital camera and color printing technology with regard to their applic...

  4. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count: even moderate-cost (~$120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  5. Digital dental photography. Part 4: choosing a camera.

    PubMed

    Ahmad, I

    2009-06-13

    With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits, close-up or macro images of the dentition, and study casts. However, for the sake of completeness, some other camera systems used in dentistry are also discussed.

  6. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  7. Use of a new high-speed digital data acquisition system in airborne ice-sounding

    USGS Publications Warehouse

    Wright, David L.; Bradley, Jerry A.; Hodge, Steven M.

    1989-01-01

    A high-speed digital data acquisition and signal-averaging system for borehole, surface, and airborne radio-frequency geophysical measurements was designed and built by the US Geological Survey. The system permits signal averaging at rates high enough to achieve significant signal-to-noise enhancement in profiling, even in airborne applications. The first field use of the system took place in Greenland in 1987, recording data on a 150 by 150 km grid centered on the summit of the Greenland ice sheet. About 6000 line-km were flown and recorded using the new system. The data can be used to aid in siting a proposed scientific corehole through the ice sheet.

  8. Preliminary Study of UAS Equipped with Thermal Camera for Volcanic Geothermal Monitoring in Taiwan

    PubMed Central

    Chio, Shih-Hong; Lin, Cheng-Horng

    2017-01-01

    Thermal infrared cameras sense temperature information of the scenes they view. With the development of UASs (Unmanned Aircraft Systems), thermal infrared cameras can now be carried on a quadcopter UAV (Unmanned Aircraft Vehicle) to collect high-resolution thermal images for volcanic geothermal monitoring in a local area. A quadcopter UAS for acquiring thermal images for volcanic geothermal monitoring has therefore been developed in Taiwan as part of this study, to overcome the difficult terrain with highly variable topography and extreme environmental conditions. An XM6 thermal infrared camera was employed in this thermal image collection system. The Trimble BD970 GNSS (Global Navigation Satellite System) OEM (Original Equipment Manufacturer) board was also carried on the quadcopter UAV to gather dual-frequency GNSS observations in order to determine the flying trajectory by using the Post-Processed Kinematic (PPK) technique; this is used to establish the position and orientation of collected thermal images with fewer ground control points (GCPs). The digital surface model (DSM) and thermal orthoimages were then produced from the collected thermal images. Tests conducted in the Hsiaoyukeng area of Taiwan’s Yangmingshan National Park show that, in the area surrounded by GCPs, about 37% of the differences between the produced DSM and airborne LIDAR (Light Detection and Ranging) data fall between −1 m and 1 m, and 66% between −2 m and 2 m. As the accuracy of the thermal orthoimages is about 1.78 m, it is deemed sufficient for volcanic geothermal monitoring. In addition, the thermal orthoimages not only show some phenomena more globally than do the traditional methods for volcanic geothermal monitoring, but also show that the developed system can be further employed in Taiwan in the future. PMID:28718790

  9. Preliminary Study of UAS Equipped with Thermal Camera for Volcanic Geothermal Monitoring in Taiwan.

    PubMed

    Chio, Shih-Hong; Lin, Cheng-Horng

    2017-07-18

    Thermal infrared cameras sense temperature information of the scenes they view. With the development of UASs (Unmanned Aircraft Systems), thermal infrared cameras can now be carried on a quadcopter UAV (Unmanned Aircraft Vehicle) to collect high-resolution thermal images for volcanic geothermal monitoring in a local area. A quadcopter UAS for acquiring thermal images for volcanic geothermal monitoring has therefore been developed in Taiwan as part of this study, to overcome the difficult terrain with highly variable topography and extreme environmental conditions. An XM6 thermal infrared camera was employed in this thermal image collection system. The Trimble BD970 GNSS (Global Navigation Satellite System) OEM (Original Equipment Manufacturer) board was also carried on the quadcopter UAV to gather dual-frequency GNSS observations in order to determine the flying trajectory by using the Post-Processed Kinematic (PPK) technique; this is used to establish the position and orientation of collected thermal images with fewer ground control points (GCPs). The digital surface model (DSM) and thermal orthoimages were then produced from the collected thermal images. Tests conducted in the Hsiaoyukeng area of Taiwan's Yangmingshan National Park show that, in the area surrounded by GCPs, about 37% of the differences between the produced DSM and airborne LIDAR (Light Detection and Ranging) data fall between -1 m and 1 m, and 66% between -2 m and 2 m. As the accuracy of the thermal orthoimages is about 1.78 m, it is deemed sufficient for volcanic geothermal monitoring. In addition, the thermal orthoimages not only show some phenomena more globally than do the traditional methods for volcanic geothermal monitoring, but also show that the developed system can be further employed in Taiwan in the future.

  10. Digital Earth Watch: Investigating the World with Digital Cameras

    NASA Astrophysics Data System (ADS)

    Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.

    2015-12-01

    Every digital camera, including the smart phone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website that lets students explore light, color and pixels, manipulate color in images, and make measurements will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu

  11. Video systems for real-time oil-spill detection

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.

    1973-01-01

    Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.

  12. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

    Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably. The digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that usually appear after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach has been applied to an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of the proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
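    The core idea of fusing multiple short exposures can be sketched in its simplest form, plain averaging of aligned frames, which reduces noise roughly as 1/sqrt(N) (the paper's ghost handling and two-scale decomposition are omitted; the scene and noise levels below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_short_exposures(frames):
    """Average N aligned short exposures; zero-mean noise std drops
    roughly as 1/sqrt(N) while the signal is preserved."""
    return np.mean(frames, axis=0)

scene = np.full((64, 64), 50.0)  # true signal level
n_frames, sigma = 16, 8.0
frames = [scene + rng.normal(0, sigma, scene.shape) for _ in range(n_frames)]
fused = fuse_short_exposures(frames)

noise_single = np.std(frames[0] - scene)  # ~ sigma
noise_fused = np.std(fused - scene)       # ~ sigma / sqrt(n_frames)
```

    A production pipeline would align the frames and mask moving objects before averaging, which is exactly the ghost-artifact problem the paper addresses.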

  13. Integration of near-surface remote sensing and eddy covariance measurements: new insights on managed ecosystem structure and functioning

    NASA Astrophysics Data System (ADS)

    Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.

    2009-12-01

    Ground-based, visible light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights on temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks causing earlier green-up and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and land management perspective.
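    A "simple greenness index" of the kind mentioned above is commonly the green chromatic coordinate, G/(R+G+B). The abstract does not specify which index was used, so the following is a generic sketch with synthetic canopy snapshots:

```python
import numpy as np

def gcc(image):
    """Green chromatic coordinate G/(R+G+B), a common greenness index
    computed from a camera's region of interest."""
    img = np.asarray(image, dtype=float)
    r, g, b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
    return g / (r + g + b)

# Two synthetic canopy snapshots: green-up should raise the GCC
early = np.zeros((8, 8, 3), np.uint8); early[:] = (90, 90, 60)
peak = np.zeros((8, 8, 3), np.uint8);  peak[:] = (60, 140, 50)
```

    Tracking this single number through a season yields the greenness time series that is then compared against LAI and eddy-covariance CO2 fluxes.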

  14. Experimental Advanced Airborne Research Lidar (EAARL) Data Processing Manual

    USGS Publications Warehouse

    Bonisteel, Jamie M.; Nayegandhi, Amar; Wright, C. Wayne; Brock, John C.; Nagle, David

    2009-01-01

    The Experimental Advanced Airborne Research Lidar (EAARL) is an example of a Light Detection and Ranging (Lidar) system that utilizes a blue-green wavelength (532 nanometers) to determine the distance to an object. The distance is determined by recording the travel time of a transmitted pulse at the speed of light (fig. 1). This system uses raster laser scanning with full-waveform (multi-peak) resolving capabilities to measure submerged topography and adjacent coastal land elevations simultaneously (Nayegandhi and others, 2009). This document reviews procedures for the post-processing of EAARL data using the custom-built Airborne Lidar Processing System (ALPS). ALPS software was developed in an open-source programming environment operated on a Linux platform. It has the ability to combine the laser return backscatter digitized at 1-nanosecond intervals with aircraft positioning information. This solution enables the exploration and processing of the EAARL data in an interactive or batch mode. ALPS also includes modules for the creation of bare earth, canopy-top, and submerged topography Digital Elevation Models (DEMs). The EAARL system uses an Earth-centered coordinate and reference system that removes the necessity to reference submerged topography data relative to water level or tide gages (Nayegandhi and others, 2006). The EAARL system can be mounted in an array of small twin-engine aircraft that operate at 300 meters above ground level (AGL) at a speed of 60 meters per second (117 knots). While other systems strive to maximize operational depth limits, EAARL has a narrow transmit beam and receiver field of view (1.5 to 2 milliradians), which improves the depth-measurement accuracy in shallow, clear water but limits the maximum depth to about 1.5 Secchi disk depth (~20 meters) in clear water. 
The laser transmitter [Continuum EPO-5000 yttrium aluminum garnet (YAG)] produces up to 5,000 short-duration (1.2 nanosecond), low-power (70 microjoules) pulses each second. Each pulse is focused into an illumination area that has a radius of about 20 centimeters on the ground. The pulse-repetition frequency of the EAARL transmitter varies along each across-track scan to produce equal cross-track sample spacing and near uniform density (Nayegandhi and others, 2006). Targets can have varying physical and optical characteristics that cause extreme fluctuations in laser backscatter complexity and signal strength. To accommodate this dynamic range, EAARL has the real-time ability to detect, capture, and automatically adapt to each laser return backscatter. The backscattered energy is collected by an array of four high-speed waveform digitizers connected to an array of four sub-nanosecond photodetectors. Each of the four photodetectors receives a finite range of the returning laser backscatter photons. The most sensitive channel receives 90% of the photons, the least sensitive receives 0.9%, and the middle channel receives 9% (Wright and Brock, 2002). The fourth channel is available for detection but is not currently being utilized. All four channels are digitized simultaneously into 65,536 samples for every laser pulse. Receiver optics consists of a 15-centimeter-diameter dielectric-coated Newtonian telescope, a computer-driven raster scanning mirror oscillating at 12.5 hertz (25 rasters per second), and an array of sub-nanosecond photodetectors. The signal emitted by the pulsed laser transmitter is amplified as backscatter by the optical telescope receiver. The photomultiplier tube (PMT) then converts the optical energy into electrical impulses (Nayegandhi and others, 2006). 
In addition to the full-waveform resolving laser, the EAARL sensor suite includes a down-looking 70-centimeter-resolution Red-Green-Blue (RGB) digital network camera, a high-resolution color infrared (CIR) multispectral camera (14-centimeter-resolution), two precision dual-frequency kinematic carrier-phase global positioning system (GPS) receivers, and an
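    The ranging principle stated at the start of this entry, distance from the two-way pulse travel time at the speed of light, reduces to d = c·t/2, so each 1-nanosecond digitizer sample spans roughly 15 cm of range. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_travel_time(t_seconds):
    """One-way range from two-way pulse travel time: d = c * t / 2."""
    return C * t_seconds / 2.0

# Each 1-ns waveform sample therefore corresponds to ~15 cm of range
sample_dt = 1e-9
range_per_sample = range_from_travel_time(sample_dt)

# A 300 m AGL flight altitude corresponds to a ~2 microsecond round trip
altitude = range_from_travel_time(2e-6)
```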

  15. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots in each camera frame with the peak heights in the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced by strong-field dissociative double ionization of methyl iodide.
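    Real-time centroiding as described above amounts to locating each ion spot as the intensity-weighted centre of the pixels it lights up. A minimal single-spot sketch (real frames contain multiple spots that must first be segmented; the helper below is illustrative):

```python
import numpy as np

def centroid_spot(frame, thresh):
    """Intensity-weighted centroid (x, y) of all pixels above threshold:
    a simplified, single-blob version of real-time centroiding."""
    ys, xs = np.nonzero(frame > thresh)
    w = frame[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

frame = np.zeros((16, 16))
frame[5, 7] = 100.0  # bright pixel
frame[5, 8] = 100.0  # equally bright neighbour -> centroid midway
cx, cy = centroid_spot(frame, thresh=10.0)
```

    Weighting by intensity is what gives sub-pixel positions from a phosphor-screen image; the spot intensities themselves are then matched against PMT peak heights for multi-hit assignment.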

  16. Suitability of ground-based SfM-MVS for monitoring glacial and periglacial processes

    NASA Astrophysics Data System (ADS)

    Piermattei, Livia; Carturan, Luca; de Blasi, Fabrizio; Tarolli, Paolo; Dalla Fontana, Giancarlo; Vettore, Antonio; Pfeifer, Norbert

    2016-05-01

    Photo-based surface reconstruction is rapidly emerging as an alternative survey technique to lidar (light detection and ranging) in many fields of geoscience, fostered by the recent development of computer vision algorithms such as structure from motion (SfM) and dense image matching methods such as multi-view stereo (MVS). The objectives of this work are to test the suitability of the ground-based SfM-MVS approach for calculating the geodetic mass balance of a 2.1 km2 glacier and for detecting the surface displacement of a neighbouring active rock glacier located in the eastern Italian Alps. The photos were acquired in 2013 and 2014 using a digital consumer-grade camera during single-day field surveys. Airborne laser scanning (ALS, otherwise known as airborne lidar) data were used as benchmarks to estimate the accuracy of the photogrammetric digital elevation models (DEMs) and the reliability of the method. The SfM-MVS approach enabled the reconstruction of high-quality DEMs, which provided estimates of glacial and periglacial processes similar to those achievable using ALS. In stable bedrock areas outside the glacier, the mean and standard deviation of the elevation difference between the SfM-MVS DEM and the ALS DEM were -0.42 ± 1.72 m and 0.03 ± 0.74 m in 2013 and 2014, respectively. The overall patterns of elevation loss and gain on the glacier were similar with both methods, ranging between -5.53 and +3.48 m. In the rock glacier area, the elevation difference between the SfM-MVS DEM and the ALS DEM was 0.02 ± 0.17 m. The SfM-MVS approach was able to reproduce the patterns and magnitudes of displacement of the rock glacier observed by the ALS, ranging between 0.00 and 0.48 m per year. The use of natural targets as ground control points, the occurrence of shadowed and low-contrast areas, and in particular the suboptimal camera network geometry imposed by the morphology of the study area were the main factors negatively affecting the accuracy of the photogrammetric DEMs.
Technical improvements such as using an aerial platform and/or placing artificial targets could significantly improve the results but run the risk of being more demanding in terms of costs and logistics.
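
    The benchmark statistics reported above (mean ± standard deviation of SfM-MVS minus ALS elevations over stable areas) amount to simple array arithmetic once the two DEMs are on a common grid. A minimal sketch, assuming two co-registered NumPy arrays; the function name, the synthetic elevations, and the bias/noise magnitudes below are illustrative, not values from the study:

    ```python
    import numpy as np

    def dem_difference_stats(dem_a, dem_b, stable_mask):
        """Mean and standard deviation of elevation differences (dem_a - dem_b)
        restricted to stable areas, as used to benchmark a photogrammetric DEM
        against an ALS reference."""
        diff = (dem_a - dem_b)[stable_mask]
        return diff.mean(), diff.std()

    # Synthetic example: a 100 x 100 DEM pair differing by a small bias plus noise.
    rng = np.random.default_rng(0)
    als = rng.uniform(2500.0, 2600.0, (100, 100))        # reference ALS elevations (m)
    sfm = als + 0.03 + rng.normal(0.0, 0.5, (100, 100))  # SfM DEM with a +3 cm bias
    stable = np.ones_like(als, dtype=bool)               # pretend the whole area is stable bedrock

    mean_dz, std_dz = dem_difference_stats(sfm, als, stable)
    print(f"mean {mean_dz:+.2f} m, std {std_dz:.2f} m")
    ```

    In practice the stable mask would exclude the glacier and rock glacier, so that the statistics measure survey error rather than real surface change.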

  17. Digital data from the Great Sand Dunes airborne gravity gradient survey, south-central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Abraham, J.D.; Grauch, V.J.S.; Labson, V.F.; Hodges, G.

    2013-01-01

    This report contains digital data and supporting explanatory files describing data types, data formats, and survey procedures for a high-resolution airborne gravity gradient (AGG) survey at Great Sand Dunes National Park, Alamosa and Saguache Counties, south-central Colorado. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve. The data described were collected from a high-resolution AGG survey flown in February 2012, by Fugro Airborne Surveys Corp., on contract to the U.S. Geological Survey. Scientific objectives of the AGG survey are to investigate the subsurface structural framework that may influence groundwater hydrology and seismic hazards, and to investigate AGG methods and resolution using different flight specifications. Funding was provided by an airborne geophysics training program of the U.S. Department of Defense's Task Force for Business & Stability Operations.

  18. [Intra-oral digital photography with the non professional camera--simplicity and effectiveness at a low price].

    PubMed

    Sackstein, M

    2006-10-01

    Over the last five years digital photography has become ubiquitous. For the family photo album, a 4 or 5 megapixel camera costing about 2000 NIS will produce satisfactory results for most people. However, for intra-oral photography the common wisdom holds that only professional photographic equipment is up to the task. Such equipment typically costs around 12,000 NIS and includes the camera body, an attachable macro lens and a ringflash. The following article challenges this conception. Although professional equipment does produce the most exemplary results, a highly effective database of clinical pictures can be compiled even with a "non-professional" digital camera. Since the year 2002, my clinical work has been routinely documented with digital cameras of the Nikon CoolPix series. The advantages are that these digicams are economical both in price and in size and allow easy transport and operation when compared to their expensive and bulky professional counterparts. The details of how to use a non-professional digicam to produce and maintain an effective clinical picture database, for documentation, monitoring, demonstration and professional fulfillment, are described below.

  19. A study on airborne integrated display system and human information processing

    NASA Technical Reports Server (NTRS)

    Mizumoto, K.; Iwamoto, H.; Shimizu, S.; Kuroda, I.

    1983-01-01

The cognitive behavior of pilots was examined in an experiment involving mock-ups of an eight-display electronic attitude direction indicator for an airborne integrated display. Displays were presented in digital, combined analog-digital, and analog formats to experienced pilots. Two tests were run, one involving the speed of memorization in a single exposure and the other comprising two five-second exposures spaced 30 sec apart. Errors increased with the speed of memorization. Generally, the analog information was assimilated faster than the digital data with regard to response speed. Information processing was quantified as 25 bits for the first five-second exposure and 15 bits during the second.

  20. Comparison between different cost devices for digital capture of X-ray films: an image characteristics detection approach.

    PubMed

    Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés

    2012-02-01

A common teleradiology practice is digitizing films. Because specialized digitizers are very expensive, there is a trend toward using conventional scanners and digital cameras instead. Statistical clinical studies, which are difficult to carry out, would be required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized digitizer, the ICR (US$ 15,000); a conventional scanner, the UMAX (US$ 1,800); and a digital camera, the LUMIX (US$ 450, plus about US$ 400 for an additional support system and a light box). Test patterns printed on films were used. All three devices detected fewer gray levels than the real values but showed acceptable contrast and low geometric deformation. All three devices are appropriate solutions, but the digital camera requires more operator training and more settings.

  1. Digital photography for the light microscope: results with a gated, video-rate CCD camera and NIH-image software.

    PubMed

    Shaw, S L; Salmon, E D; Quatrano, R S

    1995-12-01

    In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.

  2. Designing for Diverse Classrooms: Using iPads and Digital Cameras to Compose eBooks with Emergent Bilingual/Biliterate Four-Year-Olds

    ERIC Educational Resources Information Center

    Rowe, Deborah Wells; Miller, Mary E.

    2016-01-01

    This paper reports the findings of a two-year design study exploring instructional conditions supporting emerging, bilingual/biliterate, four-year-olds' digital composing. With adult support, children used child-friendly, digital cameras and iPads equipped with writing, drawing and bookmaking apps to compose multimodal, multilingual eBooks…

  3. 2010 A Digital Odyssey: Exploring Document Camera Technology and Computer Self-Efficacy in a Digital Era

    ERIC Educational Resources Information Center

    Hoge, Robert Joaquin

    2010-01-01

    Within the sphere of education, navigating throughout a digital world has become a matter of necessity for the developing professional, as with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT; to see if there is a relationship between teachers' comfort with DCT and to the…

  4. Digital Diversity: A Basic Tool with Lots of Uses

    ERIC Educational Resources Information Center

    Coy, Mary

    2006-01-01

    In this article the author relates how the digital camera has altered the way she teaches and the way her students learn. She also emphasizes the importance for teachers to have software that can edit, print, and incorporate photos. She cites several instances in which a digital camera can be used: (1) PowerPoint presentations; (2) Open house; (3)…

  5. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
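
    The feature construction described above can be sketched for one of the four directions: threshold a difference 2-D array, then estimate the one-step Markov transition probability matrix of horizontally adjacent elements. This is a minimal illustration with an assumed threshold T = 3, not the authors' implementation:

    ```python
    import numpy as np

    def transition_probability_features(diff_array, T=3):
        """Horizontal one-step Markov transition probability matrix of a
        thresholded difference 2-D array, flattened into a (2T+1)^2 feature
        vector (one of the four directional matrices)."""
        d = np.clip(diff_array, -T, T)                  # thresholding to [-T, T]
        cur, nxt = d[:, :-1].ravel(), d[:, 1:].ravel()  # horizontally adjacent pairs
        n = 2 * T + 1
        counts = np.zeros((n, n))
        for i, j in zip(cur + T, nxt + T):              # shift values into [0, 2T]
            counts[int(i), int(j)] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
        return probs.ravel()

    # Toy stand-in for a JPEG 2-D array: horizontal first differences of an 8x9 block.
    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, (8, 9))
    diff = block[:, 1:].astype(int) - block[:, :-1].astype(int)
    features = transition_probability_features(diff)
    print(features.shape)  # 49 features for T = 3
    ```

    Concatenating such vectors over the four directions and the Y and Cb difference arrays would yield the feature set fed to the multi-class SVM.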

  6. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, enabling the camera pixels always to receive a reasonable exposure through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensities and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
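
    The idea of per-pixel coded exposure for HDRI can be illustrated with a toy model: each pixel's DMD mirror passes light for a programmable fraction of the exposure, saturated pixels get that fraction reduced, and the HDR image is recovered by dividing the clipped measurement by the per-pixel fraction. The adaptation rule below (halving on saturation) is an assumed simplification, not the authors' algorithm:

    ```python
    import numpy as np

    def recover_hdr(measured, modulation):
        """Recover scene radiance from a per-pixel coded exposure: divide each
        recorded value by the fraction of the exposure its DMD mirror was 'on'."""
        return measured / modulation

    def adapt_modulation(measured, modulation, full_scale=255.0):
        """One step of a simple adaptive scheme: halve the exposure fraction of
        saturated pixels so they fall back into the sensor's usable range."""
        return np.where(measured >= full_scale, modulation * 0.5, modulation)

    # A scene whose radiance spans four orders of magnitude, seen by an 8-bit sensor.
    radiance = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])
    modulation = np.ones_like(radiance)                # start at full exposure
    measured = np.minimum(radiance * modulation, 255.0)

    for _ in range(8):                                 # iterate until nothing clips
        modulation = adapt_modulation(measured, modulation)
        measured = np.minimum(radiance * modulation, 255.0)

    hdr = recover_hdr(measured, modulation)
    print(hdr)  # recovered values equal the scene radiance
    ```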

  7. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  8. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  9. Printed products for digital cameras and mobile devices

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  10. Positioning in Time and Space - Cost-Effective Exterior Orientation for Airborne Archaeological Photographs

    NASA Astrophysics Data System (ADS)

    Verhoeven, G.; Wieser, M.; Briese, C.; Doneus, M.

    2013-07-01

Since manned, airborne aerial reconnaissance for archaeological purposes is often characterised by more-or-less random photographing of archaeological features on the Earth, the exact position and orientation of the camera during image acquisition become very important in an effective inventorying and interpretation workflow for these aerial photographs. Although positioning is generally achieved by simultaneously logging the flight path or directly recording the camera's position with a GNSS receiver, this approach does not record the necessary roll, pitch and yaw angles of the camera. The latter are essential elements of the complete exterior orientation of the camera, which, together with the inner orientation of the camera, accurately defines the portion of the Earth recorded in the photograph. This paper proposes a cost-effective, accurate and precise GNSS/IMU solution (image position: 2.5 m and orientation: 2°, both at 1σ) to record all essential exterior orientation parameters for the direct georeferencing of the images. After introducing the hardware used, this paper presents the developed software that records and estimates these parameters. Furthermore, this direct georeferencing information can be embedded into the image's metadata. Subsequently, the first results of the estimation of the mounting calibration (i.e. the misalignment between the camera and GNSS/IMU coordinate frames) are provided. Furthermore, a comparison with a dedicated commercial photographic GNSS/IMU solution demonstrates the superiority of the introduced solution. Finally, an outlook on future tests and improvements concludes the article.

  11. Modeling of digital information optical encryption system with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should make it possible to build a high-speed optical encryption system. Results of modeling of a digital information optical encryption system with spatially incoherent illumination are presented. Input information is displayed with the first SLM, and the encryption element with the second SLM. Factors taken into account are: the resolution of the SLMs and camera, hologram reconstruction noise, camera noise and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), a low bit error rate and high cryptographic strength.

  12. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
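
    The figure of merit defined above (the performance gap of the weakest parameter, in orders of magnitude) reduces to taking the worst base-10 log ratio across benchmarked parameters. A minimal sketch; the parameter names and values below are invented placeholders, not measurements from the study:

    ```python
    import math

    def figure_of_merit(eye, sensor):
        """Performance gap (orders of magnitude) of the weakest parameter.
        For each 'higher is better' quantity, the gap is log10(eye / sensor);
        the figure of merit is the largest (worst) gap."""
        gaps = {k: math.log10(eye[k] / sensor[k]) for k in eye}
        worst = max(gaps, key=gaps.get)
        return worst, gaps[worst]

    # Hypothetical illustrative numbers only (not from the paper):
    human_eye = {"dynamic_range": 1e8, "spatial_resolution": 1.2e7}
    image_sensor = {"dynamic_range": 1e4, "spatial_resolution": 5e6}

    param, gap = figure_of_merit(human_eye, image_sensor)
    print(param, round(gap, 2))  # dynamic range is the limiting factor here
    ```

    With these placeholder numbers, the sensor trails the eye by 4 orders of magnitude in dynamic range but only ~0.4 in spatial resolution, so dynamic range sets the figure of merit, mirroring the paper's finding that dynamic range and dark limit are the most limiting factors.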

  13. Teaching with Technology: Step Back and Hand over the Cameras! Using Digital Cameras to Facilitate Mathematics Learning with Young Children in K-2 Classrooms

    ERIC Educational Resources Information Center

    Northcote, Maria

    2011-01-01

    Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…

  14. A stereoscopic lens for digital cinema cameras

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  15. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  16. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  17. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip - according to the objects' movement - results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, only a small number of rows is read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects of the practical application are discussed and key features of the camera are listed.
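
    The shift-and-accumulate core of digital TDI can be sketched in a few lines. A toy simulation, assuming the object moves exactly one sensor row per frame and that rows 0..n_stages-1 of the area sensor serve as the TDI stages (the real camera implements this on an FPGA):

    ```python
    import numpy as np

    def digital_tdi(frames, n_stages):
        """Accumulate each scene line over n_stages frames as it moves one
        sensor row per frame: frame t+s sees the line that entered at frame t
        on row s, so summing frames[t+s][s] over s integrates the same line."""
        n_lines = len(frames) - n_stages + 1
        out = np.zeros((n_lines, frames[0].shape[1]))
        for t in range(n_lines):
            for s in range(n_stages):
                out[t] += frames[t + s][s]   # shift-and-accumulate
        return out

    # Toy scene: each scene line k has constant brightness k; in frame t the
    # sensor sees scene line (t - s) at row s (object moves one row per frame).
    width, n_stages, n_frames = 4, 3, 6
    scene_line = lambda k: np.full(width, float(k))
    frames = [np.stack([scene_line(t - s) for s in range(n_stages)])
              for t in range(n_frames)]

    out = digital_tdi(frames, n_stages)
    print(out[:, 0])  # each accumulated line equals n_stages x its brightness
    ```

    The effective exposure grows by the factor n_stages without motion blur, since each accumulated sample always integrates the same scene line.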

  18. The influence of the in situ camera calibration for direct georeferencing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Barrios, R.; Centeno, J.

    2014-11-01

    The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process due to the GNSS/INS technologies, the accuracies of the obtained results depend on the quality of a group of parameters that models accurately the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame due to the impossibility of having all sensors on the same position and orientation in the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of the direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary in different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which are not available in the conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of the in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and control point arrangement. A digital Vexcel UltraCam XP camera connected to POS AV TM system was used to get two photogrammetric image blocks. The blocks have different flight directions and opposite flight line. In situ calibration procedures to compute different sets of IOPs are performed and their results are analyzed and used in photogrammetric experiments. 
The IOPs from the in situ camera calibration significantly improve the accuracy of the direct georeferencing. The results obtained from the experiments are shown and discussed.

  19. The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.

    2002-12-01

    Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10m/pix (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate almost completely overlapping image strips, each of them having more than 27,000 image lines, typically. [HRSC is also equipped with a superresolution channel for imaging of selected targets at up to 2.3 m/pixel]. The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit- and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered with resolutions better than 30m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network, and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new topographic image map series 1:200K, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!

  20. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

A laser-directed ranging system has utility in various fields, such as telerobotics and assistive applications for physically handicapped individuals. The ranging system includes a single video camera and a directional light source, such as a laser, mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors that orient the roll, pitch and yaw axes of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine the range to the target.
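
    The background-elimination step described above (capture a frame before laser illumination, then remove common pixels from the illuminated frame) can be sketched with simple frame differencing. The array sizes, threshold, reference point, and spot location below are illustrative only:

    ```python
    import numpy as np

    def locate_laser_spot(before, after, threshold=30):
        """Eliminate pixels common to a pre-illumination frame and an
        illuminated frame, then return the laser-spot centroid (row, col)."""
        diff = after.astype(int) - before.astype(int)
        mask = diff > threshold          # keep only pixels brightened by the laser
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    # Synthetic 64x64 scene with a bright 3x3 "laser spot" centred at (40, 20).
    rng = np.random.default_rng(2)
    before = rng.integers(0, 100, (64, 64))
    after = before.copy()
    after[39:42, 19:22] += 150

    r, c = locate_laser_spot(before, after)
    reference = (32.0, 32.0)                              # assumed reference point
    disparity = (r - reference[0], c - reference[1])      # input to the ranging analysis
    print(r, c)  # → 40.0 20.0
    ```

    In the patented system this disparity, combined with the known vertical and horizontal offset between laser and camera, feeds the triangulation that yields range to the target.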

  1. A New Hyperspectral Camera Designed for Small UAS Tested in Real World Applications

    NASA Astrophysics Data System (ADS)

    Marcucci, E.; Saiet, E., II; Hatfield, M. C.

    2014-12-01

The ability to investigate landscape and vegetation from airborne instruments offers many advantages, including high-resolution data, the ability to deploy instruments over a specific area, and repeat measurements. The Alaska Center for Unmanned Aircraft Systems Integration (ACUASI) has recently integrated a hyperspectral imaging camera onto their Ptarmigan hexacopter. The Rikola Hyperspectral Camera, manufactured by VTT and Rikola, Ltd., is capable of obtaining data within the 400-950 nm range with an accuracy of ~1 nm. Using the compact flash storage on the UAV limits the maximum number of channels to 24 this summer. The camera uses single frames to sequentially record the spectral bands of interest in a 37° field of view. Because the camera collects data as single frames, it takes a finite amount of time to compile the complete spectral data set. Although each frame takes only 5 nanoseconds, co-registration of frames is still required. The hovering ability of the hexacopter helps eliminate frame shift. GPS records data for incorporation into a larger dataset. Conservatively, the Ptarmigan can fly at an altitude of 400 feet, for 15 minutes, and 7000 feet away from the operator. The airborne hyperspectral instrument will be extremely useful to scientists as a platform that can provide data on request. Since the spectral range of the camera is ideal for the study of vegetation, this study 1) examines seasonal changes of vegetation in the Fairbanks area, 2) ground-truths satellite measurements, and 3) ties vegetation conditions around a weather tower to the tower readings. Through this proof of concept, ACUASI provides a means for scientists to request the most up-to-date and location-specific data for their field sites. Additionally, the resolution of the airborne instruments is much higher than that of satellite data; they may be readily tasked, and they have the advantage over manned flights in terms of manpower and cost.

  2. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    NASA Astrophysics Data System (ADS)

    Brasington, J.

    2015-12-01

Over the last five years, Structure-from-Motion photogrammetry has dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, has led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales where relaxed logistics permit the use of dense ground control and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km² where it could offer a competitive alternative to landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection. 
Results indicate that camera network design is a key determinant of model quality, and that standard aerial networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.

  3. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    NASA Astrophysics Data System (ADS)

    Brasington, James; James, Joe; Cook, Simon; Cox, Simon; Lotsari, Eliisa; McColl, Sam; Lehane, Niall; Williams, Richard; Vericat, Damia

    2016-04-01

    In recent years, 3D terrain reconstructions based on Structure-from-Motion (SfM) photogrammetry have dramatically democratized the availability of high-quality topographic data. This approach uses a non-linear bundle adjustment to simultaneously estimate camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints; instead, ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, have led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales (0.1-5 ha), where relaxed logistics permit the use of dense ground control networks and high-resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km², where it could offer a competitive alternative to established landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of the sensor platform, and related image properties are inevitable. In this presentation we provide a systematic assessment of the quality of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and to establish baseline uncertainty models for geomorphic change detection.
Results indicate that camera network design is a key determinant of model quality, and that standard aerial photogrammetric networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low-cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high-quality, large-scale terrain products that are suitable for precision fluvial change detection.
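The a-posteriori datum transformation described in this abstract is a seven-parameter similarity (Helmert) transform fitted to ground-control pairs. A minimal sketch of the standard closed-form (Umeyama) solution — the function name and test geometry are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def similarity_transform(model, control):
    """Closed-form (Umeyama) similarity transform so that
    control ≈ s * R @ model + t, fitted to matched point pairs."""
    mu_m, mu_c = model.mean(axis=0), control.mean(axis=0)
    Xm, Xc = model - mu_m, control - mu_c
    cov = Xc.T @ Xm / len(model)                  # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                            # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / Xm.var(axis=0).sum()
    t = mu_c - s * R @ mu_m
    return s, R, t
```

Applied to SfM output, `model` would hold modelled coordinates of the control targets and `control` their surveyed datum coordinates; the recovered (s, R, t) is then applied to the whole point cloud.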

  4. Airborne Imagery Collections Barrow 2013

    DOE Data Explorer

    Cherry, Jessica; Crowder, Kerri

    2015-07-20

    The data here are orthomosaics, digital surface models (DSMs), and individual frames captured during low altitude airborne flights in 2013 at the Barrow Environmental Observatory. The orthomosaics, thermal IR mosaics, and DSMs were generated from the individual frames using Structure from Motion techniques.

  5. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie-point data is required.
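The quantity recovered by such a calibration is a fixed rotation between the imager and INS frames. A minimal sketch — not the actual NASA software — that solves the least-squares boresight rotation by orthogonal Procrustes, assuming per-frame camera orientations have already been recovered from the imagery:

```python
import numpy as np

def boresight_rotation(R_cam_seq, R_ins_seq):
    """Fixed rotation R_align minimizing sum ||R_cam - R_align @ R_ins||_F
    over all frames, via SVD (orthogonal Procrustes)."""
    M = sum(Rc @ Ri.T for Rc, Ri in zip(R_cam_seq, R_ins_seq))
    U, _, Vt = np.linalg.svd(M)
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ S @ Vt
```

With noisy inputs the same formula returns the best-fit rotation in the Frobenius sense, which is why averaging over many frames of an orbit improves the estimate.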

  6. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. 
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.

  7. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    NASA Astrophysics Data System (ADS)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their costs of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras, which are very interesting for meteor observation, have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  8. Forensics for flatbed scanners

    NASA Astrophysics Data System (ADS)

    Gloe, Thomas; Franz, Elke; Winkler, Antje

    2007-02-01

    Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, the utilization of image sensor noise for identifying the origin of scanned images seems to be possible. To characterize flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and downscaling, as examples of such particularities of flatbed scanners, on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.

  9. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  10. Formal methods and digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

  11. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 × 256 pixel image by using one quadrant of a 512 × 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 × 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
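The digital reformatting step can be illustrated with a sketch. The readout geometry assumed here — each port serializing one 64-column vertical stripe in row-major order — is an illustrative assumption, not the imager's documented layout:

```python
import numpy as np

def reformat_ports(port_streams, rows=256, cols_per_port=64):
    """Reassemble four parallel port readouts into one frame.
    Assumes each port delivers a row-major serial stream of one
    64-column vertical stripe of the 256 x 256 quadrant."""
    frame = np.empty((rows, cols_per_port * len(port_streams)),
                     dtype=port_streams[0].dtype)
    for p, stream in enumerate(port_streams):
        # place this port's stripe back into its column range
        frame[:, p * cols_per_port:(p + 1) * cols_per_port] = \
            stream.reshape(rows, cols_per_port)
    return frame
```

In hardware this de-interleaving runs at pixel rate; the sketch simply shows that the mapping from port streams to image coordinates is fixed and invertible.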

  12. An assessment of the utility of a non-metric digital camera for measuring standing trees

    Treesearch

    Neil Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matthew F. Winn

    2000-01-01

    Images acquired with a commercially available digital camera were used to make measurements on 20 red oak (Quercus spp.) stems. The ranges of diameter at breast height (DBH) and height to a 10 cm upper-stem diameter were 16-66 cm and 12-20 m, respectively. Camera stations located 3, 6, 9, 12, and 15 m from the stem were studied to determine the best distance to be...

  13. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. The level-processed values were then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images were displayed in a dialogue box implemented in our software, generated according to four illuminations for the camera and three color temperatures for the monitor. A user can easily choose the best reproduced image by comparing them.
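The gamma regression and 3 × 3 color transformation described above can be sketched as follows; the function names and the specific least-squares formulation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fit_gamma(digital_values, luminances):
    """Estimate gamma of L = a * DV**gamma by linear regression
    in log-log space (slope = gamma)."""
    gamma, _ = np.polyfit(np.log(digital_values), np.log(luminances), 1)
    return gamma

def fit_color_matrix(rgb_linear, xyz):
    """Least-squares 3x3 matrix M with xyz ≈ rgb_linear @ M.T,
    fitted over gamma-corrected values of the test color samples."""
    M, *_ = np.linalg.lstsq(rgb_linear, xyz, rcond=None)
    return M.T
```

A 3 × 4 variant simply appends a constant column of ones to `rgb_linear` so the regression can absorb an offset term.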

  14. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
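The core of the method is a linear map from RGB counts to polynomial coefficients of the radiance spectrum. A minimal sketch — the training pairs, matrix values, and 680 nm normalization are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def fit_rgb_to_coeffs(rgb_train, coeff_train):
    """Least-squares linear map A such that coeffs ≈ A @ rgb,
    fitted over training pairs (e.g. from radiative transfer runs)."""
    A, *_ = np.linalg.lstsq(rgb_train, coeff_train, rcond=None)
    return A.T

def spectrum_from_rgb(A, rgb, wavelengths_nm):
    """Evaluate the radiance polynomial at the given wavelengths."""
    c = A @ rgb
    lam = wavelengths_nm / 680.0   # normalize wavelength for conditioning
    return sum(ck * lam ** k for k, ck in enumerate(c))
```

With three RGB counts available per pixel, a quadratic (three-coefficient) polynomial is the natural choice; higher degrees would make the per-pixel inversion underdetermined.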

  15. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
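The real-time centroiding step can be sketched with standard connected-component labeling; this is an illustrative reconstruction, not the authors' algorithm, and the threshold handling is an assumption:

```python
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold):
    """Label bright regions in one camera frame and return
    (row, col, integrated intensity) per ion spot. The integrated
    intensity is what gets matched against PMT time-of-flight
    peak heights for multi-hit correlation."""
    labels, n = ndimage.label(frame > threshold)
    index = range(1, n + 1)
    coms = ndimage.center_of_mass(frame, labels, index)
    sums = ndimage.sum(frame, labels, index)
    return [(r, c, s) for (r, c), s in zip(coms, sums)]
```

At a 1 kHz repetition rate, the per-frame work must stay well under 1 ms, which is why production systems implement this loop in optimized native code rather than per-pixel Python.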

  16. Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications

    DTIC Science & Technology

    2013-06-01

    Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable

  17. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  18. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  19. Digital Camera Project Fosters Communication Skills

    ERIC Educational Resources Information Center

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  20. Lidar-based mapping of flood control levees in south Louisiana

    USGS Publications Warehouse

    Thatcher, Cindy A.; Lim, Samsung; Palaseanu-Lovejoy, Monica; Danielson, Jeffrey J.; Kimbrow, Dustin R.

    2016-01-01

    Flood protection in south Louisiana is largely dependent on earthen levees, and in the aftermath of Hurricane Katrina the state’s levee system has received intense scrutiny. Accurate elevation data along the levees are critical to local levee district managers responsible for monitoring and maintaining the extensive system of non-federal levees in coastal Louisiana. In 2012, high resolution airborne lidar data were acquired over levees in Lafourche Parish, Louisiana, and a mobile terrestrial lidar survey was conducted for selected levee segments using a terrestrial lidar scanner mounted on a truck. The mobile terrestrial lidar data were collected to test the feasibility of using this relatively new technology to map flood control levees and to compare the accuracy of the terrestrial and airborne lidar. Metrics assessing levee geometry derived from the two lidar surveys are also presented as an efficient, comprehensive method to quantify levee height and stability. The vertical root mean square error values of the terrestrial lidar- and airborne lidar-derived digital terrain models were 0.038 m and 0.055 m, respectively. The comparison of levee metrics derived from the airborne and terrestrial lidar-based digital terrain models showed that both types of lidar yielded similar results, indicating that either or both surveying techniques could be used to monitor geomorphic change over time. Because airborne lidar is costly, many parts of the USA and other countries have never been mapped with airborne lidar, and repeat surveys are often not available for change detection studies. Terrestrial lidar provides a practical option for conducting repeat surveys of levees and other terrain features that cover a relatively small area, such as eroding cliffs, stream banks, and dunes.
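The vertical RMSE figures quoted above come from cell-by-cell differencing of the derived digital terrain models against reference elevations. A minimal sketch, with NaN as the no-data marker (an assumption about the grid format):

```python
import numpy as np

def vertical_rmse(dtm_test, dtm_ref):
    """Root mean square error of elevation differences over cells
    valid in both grids; NaN marks no-data cells and is ignored."""
    diff = dtm_test - dtm_ref
    return float(np.sqrt(np.nanmean(diff ** 2)))
```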

  1. Research on airborne infrared leakage detection of natural gas pipeline

    NASA Astrophysics Data System (ADS)

    Tan, Dongjie; Xu, Bin; Xu, Xu; Wang, Hongchao; Yu, Dongliang; Tian, Shengjie

    2011-12-01

    An airborne laser remote sensing technology is proposed for detecting natural gas pipeline leakage from a helicopter carrying a detector that can map traces of methane on the ground at high spatial resolution. The principle of the airborne laser remote sensing system is based on tunable diode laser absorption spectroscopy (TDLAS). The system consists of an optical unit containing the laser, a camera, a helicopter mount, an electronic unit with a DGPS antenna, a notebook computer, and a pilot monitor, and it is mounted on a helicopter. The principle and the architecture of the airborne laser remote sensing system are presented. Field test experiments were carried out on the West-East Natural Gas Pipeline of China, and the results show that the airborne detection method is suitable for detecting gas leaks from pipelines over plains, deserts, and hills, but unsuitable for areas with large elevation variation.
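TDLAS retrieval ultimately rests on Beer-Lambert absorption of the laser line by methane along the path. A minimal sketch of the inversion; the cross-section value is a placeholder assumption, not a measured constant from this system:

```python
import math

def methane_column(I, I0, sigma=1.2e-19):
    """Invert Beer-Lambert: I = I0 * exp(-sigma * N), where N is the
    methane column density along the laser path (molecules/cm^2)
    and sigma is an assumed effective absorption cross-section (cm^2)
    at the laser wavelength."""
    return -math.log(I / I0) / sigma
```

In practice, TDLAS systems modulate the laser across the absorption line and demodulate the return (e.g. 2f detection) rather than measuring a single intensity ratio, but the retrieved quantity is still this path-integrated column.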

  2. 50 CFR 216.155 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...

  3. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  4. PhenoCam Dataset v1.0: Vegetation Phenology from Digital Camera Imagery, 2000-2015

    USDA-ARS?s Scientific Manuscript database

    This data set provides a time series of vegetation phenological observations for 133 sites across diverse ecosystems of North America and Europe from 2000-2015. The phenology data were derived from conventional visible-wavelength automated digital camera imagery collected through the PhenoCam Networ...

  5. Evaluation of a novel laparoscopic camera for characterization of renal ischemia in a porcine model using digital light processing (DLP) hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.

    2012-03-01

    Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.

  6. ARC-2009-ACD09-0218-005

    NASA Image and Video Library

    2009-10-06

    NASA Conducts Airborne Science Aboard Zeppelin Airship: equipped with two imaging instruments enabling remote sensing and atmospheric science measurements not previously practical. Hyperspectral imager and large format camera mounted inside the Zeppelin nose fairing.

  7. Mapping Land and Water Surface Topography with instantaneous Structure from Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J.; Fonstad, M. A.

    2012-12-01

    Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects. However, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include anything where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam-removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. 
The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
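The surface-model differencing mentioned above can be sketched as a simple DEM of difference; the 0.05 m detection limit is an illustrative assumption, since the real threshold would come from the propagated survey uncertainty:

```python
import numpy as np

def dem_of_difference(dsm_new, dsm_old, min_detect=0.05):
    """Difference two surface models (same grid, metres); suppress
    changes below the detection limit so only significant change
    remains for volumetric or change analysis."""
    diff = dsm_new - dsm_old
    return np.where(np.abs(diff) < min_detect, 0.0, diff)
```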

  8. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
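A block-adaptive lossless scheme of the kind described can be sketched as row-wise DPCM followed by per-block Rice parameter selection; this is an illustrative reconstruction of the general technique, not the authors' codec, and it estimates the coded size rather than emitting a bitstream:

```python
import numpy as np

def rice_bits(mapped, k):
    """Golomb-Rice code length: unary quotient + stop bit + k remainder bits."""
    return int(np.sum((mapped >> k) + 1 + k))

def block_adaptive_bits(plane, block=16, kmax=14):
    """Estimate coded size of one CFA color plane: row-wise DPCM,
    zigzag mapping of signed residuals to non-negative integers,
    then the cheapest Rice parameter per block (+4 bits to signal k)."""
    resid = np.diff(plane.astype(np.int64), axis=1, prepend=0)
    mapped = np.where(resid >= 0, 2 * resid, -2 * resid - 1).ravel()
    total = 0
    for i in range(0, mapped.size, block):
        blk = mapped[i:i + block]
        total += min(rice_bits(blk, k) for k in range(kmax)) + 4
    return total
```

Per-block adaptation is what keeps the scheme cheap enough for in-camera use: each block needs only a handful of integer operations per candidate parameter, with no entropy tables to maintain.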

  9. Cost-effective handling of digital medical images in the telemedicine environment.

    PubMed

    Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel

    2007-09-01

    This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For digitization of film-based media, subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of the subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray images were diagnostically similar to the expensive digitizer. Lossy compression, when used moderately with the imaging noise to compression noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.

  10. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved dramatically in response to the demand for high-quality digital images; digital still cameras now offer several megapixels. Video cameras have higher frame rates, but their resolution is lower than that of still cameras, so high resolution and high frame rate remain incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach instead places sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  11. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios, where fashion, portrait, product, and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system developed for such applications. It works online with high frame rates and full-sensor-size images, and it provides a resolution that can be varied between 2048 × 2048 and 6144 × 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 × 2048 pixels (12 MByte), 8 seconds for 4096 × 4096 pixels (48 MByte), and 40 seconds for 6144 × 6144 pixels (108 MByte). The eyelike can be used in various configurations: as a camera body, to which most commercial lenses can be connected via existing lens adaptors, or as a digital back for most commercial 4 × 5 inch view cameras. This paper describes the eyelike camera concept with its essential system components and finishes with a description of the software needed to bring the camera's high quality to the user.
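    The quoted digitization sizes are consistent with three bytes per pixel, i.e. one byte per RGB channel. That packing is an assumption here (the sensor digitizes 12 bits per channel, so the stored format presumably truncates or packs differently), but the arithmetic reproduces the abstract's figures exactly:

```python
def image_megabytes(side_pixels, channels=3, bytes_per_channel=1):
    """Raw frame size in MiB, assuming one stored byte per channel."""
    return side_pixels * side_pixels * channels * bytes_per_channel / 2**20

for side in (2048, 4096, 6144):
    print(side, image_megabytes(side))  # 12.0, 48.0, 108.0
```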

  12. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
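    The activity's core measurement, recovering g from frame-by-frame ball positions, can be sketched with a quadratic fit. The frame rate and the noise-free positions below are synthetic stand-ins for values a student would read off the video, not data from the article.

```python
import numpy as np

# Synthetic frame-by-frame positions of a ball in free fall, as one would
# extract them from digital video (30 fps assumed; values are illustrative).
fps = 30.0
g_true = 9.81
t = np.arange(0, 0.5, 1.0 / fps)   # time stamps of 15 frames
y = 0.5 * g_true * t**2            # distance fallen, metres

# Fit y = a*t^2 + b*t + c; the acceleration is 2*a.
a, b, c = np.polyfit(t, y, 2)
g_est = 2.0 * a
print(round(g_est, 2))  # 9.81
```

    With real video data the positions carry digitization noise, so the fitted g typically lands within a few percent of 9.81 m/s².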

  13. Bringing the Digital Camera to the Physics Lab

    ERIC Educational Resources Information Center

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-01-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as we examine in this work.

  14. Development of a digital camera tree evaluation system

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Philip A. Araman

    2000-01-01

    Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...

  15. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  16. Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide

    DTIC Science & Technology

    2016-07-27

    a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in... installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely... assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and

  17. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All of these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film, requiring time-consuming and tedious wet processing. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments, and there is now considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in Time Delay and Integration (TDI) mode for digitally recording dynamic photoelastic stress patterns. Applications to strobe and streak photoelastic pattern recording, and the system's limitations, are explained in the paper.

  18. Comparison of 10 digital SLR cameras for orthodontic photography.

    PubMed

    Bister, D; Mordarai, F; Aveling, R M

    2006-09-01

    Digital photography is now widely used to document orthodontic patients. High-quality intra-oral photography depends on a satisfactory depth of field and good illumination, and automatic 'through the lens' (TTL) metering is ideal for achieving both. Ten current digital single-lens reflex (SLR) cameras were tested for intra- and extra-oral photography as used in orthodontics. The manufacturer's recommended macro lens and macro flash were used with each camera. Handling characteristics, colour reproducibility, quality of the viewfinder, and flash recharge time were investigated. No camera took acceptable images in the factory default setting or 'automatic' mode: this mode was absent on some cameras (Nikon, Fujifilm), led to overexposure (Olympus), or gave poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only the Olympus cameras could take intra- and extra-oral photographs without changing settings, and they were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax) or of aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, once appropriately adjusted, all cameras produced high-quality intra- and extra-oral images, and the resolution of the images is more than satisfactory for all cameras. There were significant differences in the quality of colour reproduction and in the size and brightness of the viewfinders. The Nikon D100 and Fujifilm S3 Pro consistently scored best for colour fidelity; Pentax and Konica-Minolta had the largest and brightest viewfinders.

  19. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  20. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
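    Real-time centroiding of spots on a camera frame, as used above to extract positional information, can be sketched as follows. The thresholding and flood-fill labeling are a minimal illustration with assumed parameters, not the authors' optimized 1 kHz algorithm.

```python
import numpy as np

def centroid_spots(frame, threshold):
    """Label pixels above threshold into connected blobs (4-connectivity,
    simple flood fill) and return each blob's intensity-weighted centroid."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = frame.shape
    centroids = []
    for r0 in range(h):
        for c0 in range(w):
            if mask[r0, c0] and not seen[r0, c0]:
                stack, pix = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pix.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                wts = np.array([frame[p] for p in pix], dtype=float)
                rows = np.array([p[0] for p in pix], dtype=float)
                cols = np.array([p[1] for p in pix], dtype=float)
                centroids.append((float((rows * wts).sum() / wts.sum()),
                                  float((cols * wts).sum() / wts.sum())))
    return centroids

frame = np.zeros((32, 32))
frame[5:8, 5:8] = 10.0     # one bright spot
frame[20:22, 14:16] = 7.0  # another
cents = centroid_spots(frame, threshold=1.0)
print(len(cents))  # 2
```

    Intensity weighting is what lets the spot intensities be correlated with MCP peak heights for multi-hit assignment.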

  1. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz; windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier applications such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  2. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations, and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between the two camera types is the presence of the mirror mechanism, which changes the way the beam entering the lens reaches the sensor. In this study, two digital cameras, one with a mirror (Nikon D700) and one without (Sony a6000), were used in a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after photograph alignment. In addition, cross sections created on the 3D models for both data sources almost overlap, so the maximum area difference between them is quite small. The mirrored camera was more self-consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it was determined that mirrorless cameras, and point clouds produced from their photographs, can be used for terrestrial photogrammetric studies.

  3. Rapid, decimeter-resolution fault zone topography mapped with Structure from Motion

    NASA Astrophysics Data System (ADS)

    Johnson, K. L.; Nissen, E.; Saripalli, S.; Arrowsmith, R.; McGarey, P.; Scharer, K. M.; Williams, P. L.

    2013-12-01

    Recent advances in the generation of high-resolution topography have revolutionized our ability to detect subtle geomorphic features related to ground-rupturing earthquakes. Currently, the most popular topographic mapping methods are airborne Light Detection And Ranging (LiDAR) and terrestrial laser scanning (TLS). Though powerful, these laser scanning methods have some inherent drawbacks: airborne LiDAR is expensive and can be logistically complicated, while TLS is time consuming even for small field sites and suffers from patchy coverage due to its restricted field-of-view. An alternative mapping technique, called Structure from Motion (SfM), builds upon traditional photogrammetry to reproduce the topography and texture of a scene from photographs taken at varying viewpoints. The improved availability of cheap, unmanned aerial vehicles (UAVs) as camera platforms further expedites data collection by covering large areas efficiently with optimal camera angles. Here, we introduce a simple and affordable UAV- or balloon-based SfM mapping system which can produce dense point clouds and sub-decimeter resolution digital elevation models (DEMs) registered to geospatial coordinates using either the photograph's GPS tags or a few ground control points across the scene. The system is ideally suited for studying ruptures of prehistoric, historic, and modern earthquakes in areas of sparse or low-lying vegetation. We use two sites from southern California faults to illustrate. The first is the ~0.1 km2 Washington Street site, located on the Banning strand of the San Andreas fault near Thousand Palms. A high-resolution DEM with ~700 point/m2 was produced from 230 photos collected on a balloon platform flying at 50 m above the ground. The second site is the Galway Lake Road site, which spans a ~1 km strip of the 1992 Mw 7.3 Landers earthquake on the Emerson Fault. 
The 100 point/m2 DEM was produced from 267 photos taken with a balloon platform at a height of 60 m above the ground. We compare our SfM results to existing airborne LiDAR or TLS datasets. Each SfM survey required less than 2 hours for setup and data collection, an allotment much lower than that required for TLS data collection, given the size of the sites. Processing time is somewhat slower, but depends on the quality of the DEM desired and is almost fully automated. The SfM point cloud densities we present are comparable to TLS but exceed the density of most airborne LiDAR and the orthophotos (texture maps) from the SfM are valuable complements to the DEMs. The SfM topography illuminates features along the faults that can be used to measure offsets from past ruptures, offering the potential to enhance regional seismic hazard analyses.
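    Turning an SfM point cloud into a gridded DEM at sub-decimeter resolution can be sketched by simple binning: average the elevations of all points falling in each cell. The 0.1 m cell size echoes the resolutions quoted above, but the synthetic points are illustrative, and production workflows use more sophisticated interpolation than per-cell means.

```python
import numpy as np

def grid_dem(points, cell=0.1):
    """Bin an (N, 3) point cloud into a regular grid and take the mean
    elevation per cell -- a minimal DEM-from-point-cloud sketch."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    dem = np.full((ny, nx), np.nan)          # NaN marks empty cells
    count = np.zeros((ny, nx))
    total = np.zeros((ny, nx))
    np.add.at(count, (iy, ix), 1)            # unbuffered accumulation per cell
    np.add.at(total, (iy, ix), z)
    valid = count > 0
    dem[valid] = total[valid] / count[valid]
    return dem

rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 1, 5000),
                       rng.uniform(0, 1, 5000),
                       rng.normal(100.0, 0.02, 5000)])  # ~flat terrain at 100 m
dem = grid_dem(pts, cell=0.1)
print(dem.shape)  # (10, 10)
```

    At 500 points/m² (the survey densities above), a 0.1 m cell receives ~5 points, enough for the mean to suppress per-point noise.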

  4. Software development and its description for Geoid determination based on Spherical-Cap-Harmonics Modelling using digital-zenith camera and gravimetric measurements hybrid data

    NASA Astrophysics Data System (ADS)

    Morozova, K.; Jaeger, R.; Balodis, J.; Kaminskis, J.

    2017-10-01

    Over several years the Institute of Geodesy and Geoinformatics (GGI) was engaged in the design and development of a digital zenith camera. The camera is now finished and has been tested in field measurements. In order to check these data and to use them for geoid model determination, the DFHRS (Digital Finite-element Height Reference Surface) v4.3 software is used. It is based on parametric modelling of the HRS as a continuous polynomial surface. The HRS, providing the local geoid height N, is a necessary geodetic infrastructure for GNSS-based determination of physical heights H from ellipsoidal GNSS heights h, by H = h - N. This research and publication deal with the inclusion of observed vertical deflections from the digital zenith camera into the mathematical model of the DFHRS approach and software v4.3. A first target was to test and validate the mathematical model and software using real data from the above-mentioned zenith camera observations of deflections of the vertical. A second concern was to analyze the results and the improvement of the Latvian quasi-geoid computation compared with the previous version of the HRS, computed without zenith-camera-based deflections of the vertical. Further development of the mathematical model and software concerns the use of spherical cap harmonics as the carrier function for DFHRS v5. In the sense of the strict integrated geodesy approach, which also holds for geodetic network adjustment, this enables both a full gravity field determination and a geoid and quasi-geoid determination. In addition, it allows the inclusion of gravimetric measurements, together with deflections of the vertical from digital zenith cameras, and all other types of observations. The updated version of the DFHRS software and methods are described in this publication.
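    The height relation used above, H = h - N, is a one-liner; the numeric values below are illustrative, not actual Latvian geoid data.

```python
def orthometric_height(h_ellipsoidal, geoid_height):
    """H = h - N: physical height H from a GNSS ellipsoidal height h
    and the local geoid/HRS height N (all in metres)."""
    return h_ellipsoidal - geoid_height

# Illustrative values: h = 36.2 m above the ellipsoid, N = 23.9 m
print(round(orthometric_height(36.2, 23.9), 3))  # 12.3
```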

  5. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
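    The 432-million-pixel figure follows from the standard 9 x 9 inch aerial film frame (an assumption here; the abstract does not state the frame size) scanned at the quoted 11 µm spot size:

```python
# Pixel-equivalent count of one aerial film frame at a given scan spot size.
frame_mm = 228.6    # 9-inch-square aerial film format (assumed, not from the abstract)
spot_um = 11.0      # scan spot size quoted in the abstract
pixels_per_side = frame_mm * 1000.0 / spot_um   # ~20,782 pixels per side
total_pixels = pixels_per_side ** 2
print(round(total_pixels / 1e6))  # 432 (million pixels)
```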

  6. Comparative study of the polaroid and digital non-mydriatic cameras in the detection of referrable diabetic retinopathy in Australia.

    PubMed

    Phiri, R; Keeffe, J E; Harper, C A; Taylor, H R

    2006-08-01

    To show that the non-mydriatic retinal camera (NMRC) using polaroid film is as effective as the NMRC using digital imaging in detecting referrable retinopathy. A series of patients with diabetes attending the eye out-patients department at the Royal Victorian Eye and Ear Hospital had single-field non-mydriatic fundus photographs taken using first a digital and then a polaroid camera. Dilated 30 degrees seven-field stereo fundus photographs were then taken of each eye as the gold standard. The photographs were graded in a masked fashion. Retinopathy levels were defined using the simplified Wisconsin Grading system. We used the kappa statistics for inter-reader and intrareader agreement and the generalized linear model to derive the odds ratio. There were 196 participants giving 325 undilated retinal photographs. Of these participants 111 (57%) were males. The mean age of the patients was 68.8 years. There were 298 eyes with all three sets of photographs from 154 patients. The digital NMRC had a sensitivity of 86.2%[95% confidence interval (CI) 65.8, 95.3], whilst the polaroid NMRC had a sensitivity of 84.1% (95% CI 65.5, 93.7). The specificities of the two cameras were identical at 71.2% (95% CI 58.8, 81.1). There was no difference in the ability of the polaroid and digital camera to detect referrable retinopathy (odds ratio 1.06, 95% CI 0.80, 1.40, P = 0.68). This study suggests that non-mydriatic retinal photography using polaroid film is as effective as digital imaging in the detection of referrable retinopathy in countries such as the USA and Australia or others that use the same criterion for referral.
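    Sensitivity and specificity, the figures of merit reported above, are simple ratios over the gold-standard classification. The counts below are chosen to reproduce the reported percentages for the digital camera and are not the study's actual tabulation.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening-test summary: sensitivity (true-positive rate) and
    specificity (true-negative rate) against the gold standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with 86.2% sensitivity, 71.2% specificity.
sens, spec = sensitivity_specificity(tp=25, fn=4, tn=47, fp=19)
print(round(100 * sens, 1), round(100 * spec, 1))  # 86.2 71.2
```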

  7. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products along with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and Web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken with the digital camera both in a standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during maturation could be measured using a color parameter calculated from the calibrated color images along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
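    Chart-based color calibration of the kind described can be sketched as a least-squares fit of an affine correction matrix mapping camera readings of the chart patches to their reference values. The synthetic chart values and mixing matrix below are assumptions for illustration, not the authors' method in detail.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit a 4x3 affine color-correction matrix M (3x3 mixing + offset row)
    mapping camera RGB readings of a chart to the chart's reference RGB,
    by linear least squares."""
    n = measured.shape[0]
    A = np.hstack([measured, np.ones((n, 1))])       # (n, 4): RGB + bias term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M                                          # (4, 3)

def apply_correction(M, rgb):
    return np.hstack([rgb, np.ones((rgb.shape[0], 1))]) @ M

rng = np.random.default_rng(3)
reference = rng.uniform(0, 1, (24, 3))                # 24-patch chart values (synthetic)
true_mix = np.array([[0.9, 0.05, 0.0],
                     [0.1, 0.8, 0.1],
                     [0.0, 0.1, 0.95]])
measured = reference @ true_mix + 0.02                # simulated camera response
M = fit_color_correction(measured, reference)
err = np.abs(apply_correction(M, measured) - reference).max()
print(err < 1e-8)  # the affine model recovers the chart exactly here
```

    With real sunlight-affected images, the fit is no longer exact, but refitting M from the in-frame chart per image is what cancels the changing illumination.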

  8. Mosaicked Historic Airborne Imagery from Seward Peninsula, Alaska, Starting in the 1950's

    DOE Data Explorer

    Cherry, Jessica; Wirth, Lisa

    2016-12-06

    Historical airborne imagery for each Seward Peninsula NGEE Arctic site - Teller, Kougarok, Council - with multiple years for each site. This dataset includes mosaicked, geolocated and, where possible, orthorectified, historic airborne and recent satellite imagery. The older photos were sourced from USGS's Earth Explorer site and the newer, satellite imagery is from the Statewide Digital Mapping Initiative (SDMI) project managed by the Geographic Information Network of Alaska on behalf of the state of Alaska.

  9. Thematic Conference on Remote Sensing for Exploration Geology, 6th, Houston, TX, May 16-19, 1988, Proceedings. Volumes 1 & 2

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Papers concerning remote sensing applications for exploration geology are presented, covering topics such as remote sensing technology, data availability, frontier exploration, and exploration in mature basins. Other topics include offshore applications, geobotany, mineral exploration, engineering and environmental applications, image processing, and prospects for future developments in remote sensing for exploration geology. Consideration is given to the use of data from Landsat, MSS, TM, SAR, short-wavelength IR, the Geophysical Environmental Research airborne scanner, gas chromatography, sonar imaging, the Airborne Visible-IR Imaging Spectrometer, field spectrometry, airborne thermal IR scanners, SPOT, AVHRR, SIR, the Large Format Camera, and multi-temporal satellite photographs.

  10. Bringing the Digital Camera to the Physics Lab

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-03-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as we examine in this work.

  11. Air-borne shape measurement of parabolic trough collector fields

    NASA Astrophysics Data System (ADS)

    Prahl, Christoph; Röger, Marc; Hilgert, Christoph

    2017-06-01

    The optical and thermal efficiency of parabolic trough collector solar fields depends on the performance and assembly accuracy of components such as the concentrator and absorber. For the purposes of optical inspection and approval, yield analysis, localization of low-performing areas, and optimization of the solar field, it is essential to create a complete view of the optical properties of the field. Existing optical measurement tools are based on ground-based cameras and face restrictions concerning speed, coverage volume, and automation. QFly is an airborne qualification system which provides holistic and accurate information on the geometrical, optical, and thermal properties of the entire solar field. It consists of an unmanned aerial vehicle, cameras, and related software for flight path planning, data acquisition, and evaluation. This article presents recent advances in the QFly measurement system and proposes a methodology for holistic qualification of the complete solar field with minimum impact on plant operation.

  12. Real Time Data/Video/Voice Uplink and Downlink for Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Harper, Doyal A.

    1997-01-01

    LFS was an educational outreach adventure which brought the excitement of astronomical exploration on NASA's Kuiper Airborne Observatory (KAO) to a nationwide audience of children and parents through live, interactive television broadcast from the KAO at an altitude of 41,000 feet during an actual scientific observing mission. The project encompassed three KAO flights during the fall of 1995: a short practice mission, a daytime observing flight from Moffett Field, California, to Houston, Texas, and a nighttime mission from Houston back to Moffett Field. The University of Chicago infrared research team participated in planning the program, developed auxiliary materials including background information and lesson plans, developed software that allowed students on the ground to control the telescope and on-board cameras via the Internet from the Adler Planetarium in Chicago, and acted as on-camera correspondents to explain and answer questions about the scientific research conducted during the flights.

  13. A Digital Approach to Learning Petrology

    NASA Astrophysics Data System (ADS)

    Reid, M. R.

    2011-12-01

    In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive (~$200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used for helping students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features using the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to individual students' needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help a student progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed.
In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.

  14. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different board orientations to establish the correspondences between projector images and camera images. Thereby, datasets for projector calibration are generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
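The correspondence step that underpins this calibration can be illustrated with zero-normalized cross-correlation (ZNCC), the matching criterion commonly used in digital image correlation. This is a minimal 1-D sketch; the profiles and subset size are illustrative stand-ins for the 2-D speckle subsets the paper works with.

```python
def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-length subsets."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(template, signal):
    """Slide the template along an intensity profile; return the offset
    with the highest ZNCC score, i.e. the matched correspondence."""
    scores = [zncc(template, signal[i:i + len(template)])
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

In the paper's setting the matched (camera pixel, projector pixel) pairs, together with the board's feature points, form the calibration dataset fed to a standard camera calibration algorithm.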

  15. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  16. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28 mm focal length lens, resulting in a 10 cm GSD and a swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown, so there is no way to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) LiDAR data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
    - Higher and uniform spatial resolution: 40 cm GSD
    - 45% wider swath: 435 meters vs. 300 meters at 500 meter flight altitude
    - Visible RGB co-registered overlay at 10 cm GSD
    - Enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through)
    Examples will be presented of the utility of these advantages, and a novel use of a cell phone camera for aerial photogrammetry will also be presented.
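The frame-wise adjustment described above, forcing a zero mean difference between the photogrammetric DEM and the concurrent ATM point cloud, can be sketched as follows. The data layout (a dict of grid cells and a list of lidar returns) and the sample values are hypothetical.

```python
def adjust_dem_to_atm(dem, atm_points):
    """Shift a photogrammetric DEM so its mean difference to concurrent
    ATM lidar points is zero, then report the residual RMS.
    dem: {(row, col): elevation}; atm_points: [(row, col, z), ...]."""
    diffs = [z - dem[(r, c)] for r, c, z in atm_points]
    offset = sum(diffs) / len(diffs)          # mean DEM-to-lidar bias
    adjusted = {rc: z + offset for rc, z in dem.items()}
    residuals = [z - adjusted[(r, c)] for r, c, z in atm_points]
    rms = (sum(d * d for d in residuals) / len(residuals)) ** 0.5
    return adjusted, offset, rms
```

After the shift the mean residual is zero by construction, and the reported RMS corresponds to the +/- 10 cm statistic quoted per DMS frame.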

  17. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    NASA Astrophysics Data System (ADS)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction with a metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent fruition by users after delivery through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, low cost off-the-shelf digital cameras. Results of tests made on submerged objects with three cameras are presented: Canon Power Shot G12, Intova Sport HD and GoPro HERO 2. The experiments had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft Photoscan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different capabilities of the cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  18. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
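The dip-detection step can be illustrated in a few lines: locate the deepest negative peak in the linear intensity profile and report its pixel index. The median baseline and the 30% depth threshold are illustrative choices, not the parameters of the flight hardware.

```python
def shock_location(profile, depth=0.3):
    """Locate the shock shadowgraph as the deepest dip in a linear-array
    intensity profile. 'depth' is the minimum fractional drop below the
    baseline required to report a detection (illustrative threshold)."""
    baseline = sorted(profile)[len(profile) // 2]   # median as baseline
    idx = min(range(len(profile)), key=profile.__getitem__)
    if profile[idx] <= baseline * (1.0 - depth):
        return idx
    return None  # no sufficiently deep dip: no shock edge found
```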

  19. Erosion research with a digital camera: the structure from motion method used in gully monitoring - field experiments from southern Morocco

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Rock, Gilles; Neugirg, Fabian; Müller, Christoph; Ries, Johannes

    2014-05-01

    From a geoscientific view, arid or semiarid landscapes are often associated with soil-degrading erosion processes and thus active geomorphology. In this regard, gully incision represents one of the most important influences on surface dynamics. Established approaches to monitor and quantify soil loss require costly and labor-intensive measuring methods: terrestrial or airborne LiDAR scans to create digital elevation models and unmanned airborne vehicles for image acquisition provide adequate tools for geomorphological surveying. Despite their ever-advancing abilities, they are limited in their applicability to detailed recordings of complex surfaces. Especially undercuttings and plunge pools in the headcut area of gully systems are invisible or cause shadowing effects. The presented work aims to apply and advance an adequate tool to avoid the above mentioned obstacles and weaknesses of the established methods. The emerging structure from motion-based high resolution 3D-visualisation not only proved to be useful in gully erosion research. Moreover, it provides a solid ground for additional applications in geosciences such as surface roughness measurements, quantification of gravitational mass movements or capturing stream connectivity. During field campaigns in semiarid southern Morocco a commercial DSLR camera was used to produce images that served as input data for software-based point cloud and mesh generation. Thus, complex land surfaces could be reconstructed entirely in high resolution by photographing the object from different perspectives. At different scales the resulting 3D-mesh represents a holistic reconstruction of the actual shape complexity, with its limits set only by computing capacity. Analysis and visualization of time series of different erosion-related events illustrate the additional benefit of the method. It opens new perspectives on process understanding that can be exploited by open source and commercial software.
Results depicted a soil loss of 5.28 t for a 3.5 m² area at a headcut retreat of 1.95 m after two heavy rain events. At a different site in the Souss region the depression line of a gully was lowered after channel flow and a hollow appeared while the headcut remained stable. The latter is usually interpreted as a hint of an inactive system. While formerly precise differences in volumes could only be estimated based on aerial imagery or LiDAR scans, the presented methodology allows estimates of high quality and precision. Not only in erosion research does the structure from motion method serve as a useful, flexible and cheap means to increase detail and work efficiency.
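The volume-to-mass conversion behind soil-loss figures like those above amounts to differencing two co-registered DEMs and multiplying by a soil bulk density. The grid layout and the bulk density value below are hypothetical, chosen only to show the arithmetic.

```python
def soil_loss_tonnes(dem_before, dem_after, cell_area_m2, bulk_density=1.4):
    """Soil loss from two co-registered SfM DEMs (lists of rows of
    elevations in metres). bulk_density in t/m^3 is a hypothetical value;
    only cells that lowered between surveys contribute to the loss term."""
    volume = 0.0
    for row_b, row_a in zip(dem_before, dem_after):
        for zb, za in zip(row_b, row_a):
            if za < zb:
                volume += (zb - za) * cell_area_m2
    return volume * bulk_density
```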

  20. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

    In hardly accessible areas, the collection of 3D point clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while an airborne equivalent would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) combining a commercial small UAV (Unmanned Aerial Vehicle) with a compact camera. The UAV is a DJI PHANTOM: it has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 s interval shooting. The 3D model has been constructed by digital photogrammetry using a commercial SfM software package, Agisoft PhotoScan Professional®, which can generate sparse and dense point clouds, from which polygonal models and orthophotographs can be calculated. Using the 'flight log' and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls at the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock lies on the wave-cut bench.
This system has several merits: firstly, it has a lower cost than existing measuring methods such as manned-flight surveys and aerial laser scanning. Secondly, compared to these other methods, the one the authors have presented also enables frequent measurements. Thirdly, the lightweight and compact system is applicable to a wider variety of field settings. However, the method is still in need of development, as the measurable range is narrower than that of the other airborne methods, normally up to several hectares, and the accuracy of coordinates and elevations is unknown from SfM alone.

  1. Design and Fabrication of Two-Dimensional Semiconducting Bolometer Arrays for the High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC-II)

    NASA Technical Reports Server (NTRS)

    Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. Darren; Harper, D. Al; Jhabvala, Murzy D.

    2002-01-01

    The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC "Pop-Up" Detectors (PUDs) use a unique folding technique to enable a 12 x 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar(Registered Trademark) suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, and keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element, planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 x 32-element array. Engineering results from the first light run of SHARC II at the Caltech Submillimeter Observatory (CSO) are presented.

  2. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to assume operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  4. Monitoring height and greenness of non-woody floodplain vegetation with UAV time series

    NASA Astrophysics Data System (ADS)

    van Iersel, Wimala; Straatsma, Menno; Addink, Elisabeth; Middelkoop, Hans

    2018-07-01

    Vegetation in river floodplains has important functions for biodiversity, but can also have a negative influence on flood safety. Floodplain vegetation is becoming increasingly heterogeneous in space and time as a result of river restoration projects. To document the spatio-temporal patterns of the floodplain vegetation, the need arises for efficient monitoring techniques. Monitoring is commonly performed by mapping floodplains based on single-epoch remote sensing data, thereby not considering seasonal dynamics of vegetation. The rising availability of unmanned airborne vehicles (UAVs) increases the potential monitoring frequency. Therefore, we aimed to evaluate the performance of multi-temporal high-spatial-resolution imagery, collected with a UAV, to record the dynamics in floodplain vegetation height and greenness over a growing season. Since the classification accuracy of current airborne surveys remains insufficient for low vegetation types, we focussed on seasonal variation of herbaceous and grassy vegetation with a height up to 3 m. Field reference data on vegetation height were collected six times during one year in 28 field plots within a single floodplain along the Waal River, the main distributary of the Rhine River in the Netherlands. Simultaneously with each field survey, we recorded UAV true-colour and false-colour imagery from which normalized digital surface models (nDSMs) and a consumer-grade camera vegetation index (CGCVI) were calculated.
We observed that: (1) the accuracy of a UAV-derived digital terrain model (DTM) varies over the growing season and is most accurate during winter when the vegetation is dormant, (2) vegetation height can be determined from the nDSMs in leaf-on conditions via linear regression (RMSE = 0.17-0.33 m), (3) the multitemporal nDSMs yielded meaningful temporal profiles of greenness and vegetation height and (4) herbaceous vegetation shows hysteresis for greenness and vegetation height, but no clear hysteresis was observed for grassland vegetation. These results show the high potential of using UAV-borne sensors for increasing the classification accuracy of low floodplain vegetation within the framework of floodplain monitoring.
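The nDSM computation (surface model minus terrain model) and the linear height regression used in finding (2) can be sketched as follows; the grids and paired field/nDSM heights are hypothetical.

```python
def ndsm(dsm, dtm):
    """Normalized DSM: vegetation height above terrain, per grid cell."""
    return [[s - t for s, t in zip(rs, rt)] for rs, rt in zip(dsm, dtm)]

def fit_line(x, y):
    """Ordinary least squares y = a*x + b, e.g. field-measured vegetation
    height (y) regressed on nDSM height (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx
```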

  5. Automated Meteor Detection by All-Sky Digital Camera Systems

    NASA Astrophysics Data System (ADS)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
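A minimal sketch of the detection front end, assuming simple differencing of consecutive all-sky frames to flag brightened pixels (the paper's actual pipeline includes pre-processing and recognition over time sequences; the threshold here is illustrative):

```python
def candidate_pixels(prev_frame, frame, thresh=30):
    """Flag pixels whose brightness jumps between consecutive all-sky
    frames; a roughly linear run of flagged pixels is a meteor-trail
    candidate. 'thresh' is an illustrative value, not a tuned parameter."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v - prev_frame[r][c] > thresh]
```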

  6. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera; and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
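The two corrections named above can be sketched as a flat-field division (vignetting) followed by a gain normalization between acquisition dates. The gain image and reference mean are hypothetical; in practice the flat field is estimated per band from images of a uniform target, and the normalization is computed over an invariant target rather than the whole frame.

```python
def correct_vignetting(image, flat_field):
    """Divide each pixel by a flat-field gain image (values in (0, 1],
    1.0 at the optical centre) to undo radial brightness fall-off."""
    return [[px / g for px, g in zip(row, grow)]
            for row, grow in zip(image, flat_field)]

def normalize_between_dates(image, reference_mean):
    """Scale an acquisition by the ratio of a reference mean to its own
    mean, so repeated surveys are radiometrically comparable."""
    pixels = [px for row in image for px in row]
    scale = reference_mean / (sum(pixels) / len(pixels))
    return [[px * scale for px in row] for row in image]
```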

  7. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, in some cases with a higher measurement rate than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  8. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and must be shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
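Once the pose is estimated, the colors-projection step reduces to pinhole projection of each 3D point into the image and sampling the pixel it lands on. The intrinsics below are hypothetical, points are assumed already in the camera frame, and lens distortion and occlusion testing are ignored.

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def colorize(points, image, fx, fy, cx, cy):
    """Assign each 3D point in front of the camera the color of the pixel
    it projects to; points projecting outside the image are skipped."""
    colored = []
    h, w = len(image), len(image[0])
    for p in points:
        if p[2] <= 0:
            continue  # behind the camera
        u, v = project(p, fx, fy, cx, cy)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            colored.append((p, image[vi][ui]))
    return colored
```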

  9. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black and white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  10. China Dust

    Atmospheric Science Data Center

    2013-04-16

    ... SpectroRadiometer (MISR) nadir-camera images of eastern China compare a somewhat hazy summer view from July 9, 2000 (left) with a ... arid and sparsely vegetated surfaces of Mongolia and western China pick up large quantities of yellow dust. Airborne dust clouds from the ...

  11. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which is considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space).
Extending the parameter model with FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied to an accurate photogrammetric survey and that little effort is needed to greatly improve their accuracy potential.
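The "maximum absolute length measurement error" quoted throughout this abstract is, per VDI/VDE 2634 Part 1, the largest deviation between photogrammetrically measured distances and calibrated reference lengths. A minimal sketch of that metric (all distances invented, not from the study):

```python
# Sketch of the VDI/VDE 2634 Part 1 length measurement error (LME):
# compare measured reference distances against their calibrated nominals
# and report the largest absolute deviation. Values below are invented.
def max_abs_length_error(measured_mm, nominal_mm):
    """Maximum absolute length measurement error, in micrometres."""
    errors_um = [(m - n) * 1000.0 for m, n in zip(measured_mm, nominal_mm)]
    return max(abs(e) for e in errors_um)

print(round(max_abs_length_error([500.012, 999.974], [500.000, 1000.000]), 1))  # 26.0
```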

  12. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera provides a live view inside cryogenic set-ups and allows video to be recorded.

  13. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
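The estimate rests on the fact that, for the same grating and camera geometry, the first-order fringe displacement is proportional to wavelength (d sin θ = mλ, small angles), so the unknown wavelength follows from a ratio of measured fringe positions. A hedged sketch with invented fringe offsets:

```python
# Illustrative sketch (fringe offsets invented, not the authors' data):
# with one grating and one camera, the ratio of first-order fringe offsets
# equals the ratio of wavelengths, so the IR wavelength follows directly.
LAMBDA_RED_NM = 650.0   # known red laser wavelength (assumed value)
x_red_px = 412.0        # measured first-order fringe offset, red laser
x_ir_px = 595.0         # measured first-order fringe offset, remote control

lambda_ir_nm = LAMBDA_RED_NM * x_ir_px / x_red_px
print(f"{lambda_ir_nm:.0f} nm")  # 939 nm, near typical 940 nm remote-control LEDs
```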

  14. Noncontact imaging of plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion with a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa

    2016-03-01

    A non-contact imaging method using a digital RGB camera is proposed to evaluate plethysmographic pulsation and spontaneous low-frequency oscillation. In vivo experiments on human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.

  15. Implications of atmospheric conditions for analysis of surface temperature variability derived from landscape-scale thermography.

    PubMed

    Hammerle, Albin; Meier, Fred; Heinl, Michael; Egger, Angelika; Leitinger, Georg

    2017-04-01

    Thermal infrared (TIR) cameras perfectly bridge the gap between (i) on-site measurements of land surface temperature (LST), which provide high temporal resolution at the cost of low spatial coverage, and (ii) remotely sensed satellite data, which provide high spatial coverage at relatively low spatio-temporal resolution. While LST data from satellite (LSTsat) and airborne platforms are routinely corrected for atmospheric effects, such corrections are rarely applied to LST from ground-based TIR imagery (using TIR cameras; LSTcam). We show the consequences of neglecting atmospheric effects on LSTcam of different vegetated surfaces at landscape scale. We compare LST measured from different platforms, focusing on the comparison of LST data from on-site radiometry (LSTosr) and LSTcam using a commercially available TIR camera in the region of Bozen/Bolzano (Italy). Given a digital elevation model and measured vertical air temperature profiles, we developed a multiple linear regression model to correct LSTcam data for atmospheric influences. We demonstrated the distinct effect of atmospheric conditions and related radiative processes along the measurement path on LSTcam, proving the necessity of correcting LSTcam data at landscape scale, despite the relatively short measurement distances compared with remotely sensed data. Corrected LSTcam data revealed the dampening effect of the atmosphere, especially at high temperature differences between the atmosphere and the vegetated surface. Not correcting for these effects leads to erroneous LST estimates, in particular to an underestimation of the heterogeneity in LST, both in time and space. In the most pronounced case, we found a temperature range extension of almost 10 K.

  16. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features that make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the cameras' technical features, this includes a detailed accuracy test to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a way to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  17. Fisheye image rectification using spherical and digital distortion models

    NASA Astrophysics Data System (ADS)

    Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang

    2018-02-01

    Fisheye cameras have been widely used in many applications, including close-range visual navigation and observation and cyber-city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, fisheye images contain serious distortion, which may make it difficult for human observers to recognize the objects within them. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical model-based methods cannot effectively remove the distortion, while the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model and the digital distortion model. The distortion of fisheye images can be effectively removed with the proposed approach. Many experiments validate its feasibility and effectiveness.

  18. An automated digital imaging system for environmental monitoring applications

    USGS Publications Warehouse

    Bogle, Rian; Velasco, Miguel; Vogel, John

    2013-01-01

    Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.

  19. Restoration of hot pixels in digital imagers using lossless approximation techniques

    NASA Astrophysics Data System (ADS)

    Hadar, O.; Shleifer, A.; Cohen, E.; Dotan, Y.

    2015-09-01

    During the last twenty years, digital imagers have spread into industrial and everyday devices such as satellites, security cameras, cell phones, and laptops. "Hot pixels" are the main defects in remote digital cameras. In this paper we demonstrate an improvement over existing restoration methods that use (solely or as an auxiliary tool) some average of the pixels surrounding a single pixel, such as the method of the Chapman-Koren study. The proposed method uses the CALIC algorithm and adapts it to make full use of the surrounding pixels.
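As a point of reference for the neighbour-averaging baseline this paper improves on (the actual method adapts the CALIC predictor), a naive sketch:

```python
# Naive baseline only: replace a known hot pixel with the mean of its
# valid 8-neighbours. The paper's method adapts CALIC instead.
def restore_hot_pixel(img, r, c):
    h, w = len(img), len(img[0])
    neigh = [img[rr][cc]
             for rr in range(max(0, r - 1), min(h, r + 2))
             for cc in range(max(0, c - 1), min(w, c + 2))
             if (rr, cc) != (r, c)]
    return sum(neigh) / len(neigh)

img = [[10, 12, 11],
       [13, 255, 12],   # centre pixel is "hot"
       [11, 10, 12]]
print(restore_hot_pixel(img, 1, 1))  # 11.375
```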

  20. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array (FPA) detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  1. Digital In, Digital Out: Digital Editing with Firewire.

    ERIC Educational Resources Information Center

    Doyle, Bob; Sauer, Jeff

    1997-01-01

    Reviews linear and nonlinear digital video (DV) editing equipment and software, using the IEEE 1394 (FireWire) connector. Includes a chart listing specifications and rating eight DV editing systems, reviews two DV still-photo cameras, and previews beta DV products. (PEN)

  2. An Infrared Camera Simulation for Estimating Spatial Temperature Profiles and Signal-to-Noise Ratios of an Airborne Laser-Illuminated Target

    DTIC Science & Technology

    2007-06-01

    of SNR, she incorporated the effects that an InGaAs photovoltaic detector has in producing the signal, along with the photon, Johnson, and shot noises ...the photovoltaic FPA detector modeled? • What detector noise sources limit the computed signal? 3.1 Modeling Methodology Two aspects in the IR camera...Another shot noise source in photovoltaic detectors is dark current. This current represents the current flowing in the detector when no optical radiation

  3. Investigation of fugitive emissions from petrochemical transport barges using optical remote sensing

    EPA Science Inventory

    Recent airborne remote sensing survey data acquired with passive gas imaging equipment (PGIE), in this case infrared cameras, have shown potentially significant fugitive volatile organic carbon (VOC) emissions from petrochemical transport barges. The experiment found remote sens...

  4. Digital photography

    PubMed Central

    Windsor, J S; Rodway, G W; Middleton, P M; McCarthy, S

    2006-01-01

    Objective The emergence of a new generation of “point‐and‐shoot” digital cameras offers doctors a compact, portable and user‐friendly solution to the recording of highly detailed digital photographs and video images. This work highlights the use of such technology, and provides information for those who wish to record, store and display their own medical images. Methods Over a 3‐month period, a digital camera was carried by a doctor in a busy, adult emergency department and used to record a range of clinical images that were subsequently transferred to a computer database. Results In total, 493 digital images were recorded, of which 428 were photographs and 65 were video clips. These were successfully used for teaching purposes, publications and patient records. Conclusions This study highlights the importance of informed consent, the selection of a suitable package of digital technology and the role of basic photographic technique in developing a successful digital database in a busy clinical environment. PMID:17068281

  5. Operative record using intraoperative digital data in neurosurgery.

    PubMed

    Houkin, K; Kuroda, S; Abe, H

    2000-01-01

    The purpose of this study was to develop a new method for more efficient and accurate operative records in neurosurgery using intra-operative digital data, covering both macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera, and microscopic procedures were recorded using a micro digital camera attached to an operating microscope. Operative records were then recorded digitally and filed in a computer using image-retouching software and database software. The time needed to edit the digital data and complete the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as a database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than with the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer, and desktop publishing can all be achieved using the intra-operative data, including intra-operative photographs.

  6. Analysis of airborne LiDAR surveys to quantify the characteristic morphologies of northern forested wetlands

    Treesearch

    Murray C. Richardson; Carl P. J. Mitchell; Brian A. Branfireun; Randall K. Kolka

    2010-01-01

    A new technique for quantifying the geomorphic form of northern forested wetlands from airborne LiDAR surveys is introduced, demonstrating the unprecedented ability to characterize the geomorphic form of northern forested wetlands using high-resolution digital topography. Two quantitative indices are presented, including the lagg width index (LWI) which objectively...

  7. Airborne laser altimetry and multispectral imagery for modeling Golden-cheeked Warbler (Setophaga chrysoparia) density

    Treesearch

    Steven E. Sesnie; James M. Mueller; Sarah E. Lehnen; Scott M. Rowin; Jennifer L. Reidy; Frank R. Thompson

    2016-01-01

    Robust models of wildlife population size, spatial distribution, and habitat relationships are needed to more effectively monitor endangered species and prioritize habitat conservation efforts. Remotely sensed data such as airborne laser altimetry (LiDAR) and digital color infrared (CIR) aerial photography combined with well-designed field studies can help fill these...

  8. Compressed sensing: Radar signal detection and parameter measurement for EW applications

    NASA Astrophysics Data System (ADS)

    Rao, M. Sreenivasa; Naik, K. Krishna; Reddy, K. Maheshwara

    2016-09-01

    State-of-the-art system development is strongly required for UAVs (Unmanned Aerial Vehicles) and other airborne applications, where miniature size, light weight, and low power are essential. Currently, airborne Electronic Warfare (EW) systems are developed with digital receiver technology using Nyquist sampling. The detection of radar signals and the measurement of their parameters are necessary requirements in EW digital receivers. The Random Modulator Pre-Integrator (RMPI) can be used for matched detection of signals using a smashed filter. RMPI hardware eliminates the high-sampling-rate analog-to-digital converter and reduces the number of samples using random sampling and detection of sparse orthonormal basis vectors. The RMPI exploits the structural and geometrical properties of the signal, beyond traditional time- and frequency-domain analysis, for improved detection. The concept has been proved with the help of MATLAB and LabVIEW simulations.

  9. Helms in FGB/Zarya with cameras

    NASA Image and Video Library

    2001-06-08

    ISS002-E-6526 (8 June 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). The image was recorded with a digital still camera.

  10. Reflex-free digital fundus photography using a simple and portable camera adaptor system. A viable alternative.

    PubMed

    Pirie, Chris G; Pizzirani, Stefano

    2011-12-01

    To describe a digital single-lens reflex (dSLR) camera adaptor for posterior segment photography. A total of 30 normal canine and feline animals were imaged using a dSLR adaptor that mounts between a dSLR camera body and lens. Posterior segment viewing and imaging were performed with the aid of an indirect lens ranging from 28 to 90 D. Coaxial illumination for viewing was provided by a single white light-emitting diode (LED) within the adaptor, while illumination during exposure was provided by the pop-up flash or an accessory flash. Corneal and/or lens reflections were reduced using a pair of linear polarizers with their azimuths perpendicular to one another. High-quality, high-resolution, reflection-free digital images of the retina were obtained. Subjective image evaluation demonstrated the same amount of detail as a conventional fundus camera. A wide range of magnifications [1.2-4×] and/or fields of view [31-95 degrees, horizontal] was obtained by altering the indirect lens utilized. The described adaptor may provide an alternative to existing fundus camera systems. Quality images were obtained, and the adaptor proved to be versatile, portable, and of low cost.

  11. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on piling three consecutive stereo images and an OTF-calibration method using ground control points. The laser calibration of boresight angles uses a manual and automatic method with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systemic precision of the two sensors. By analyzing the measured differences between ground control points and their corresponding image points in sequence images, object positions derived from the camera and images agree to within about 15 cm in relative error and 20 cm in absolute error. By comparing the differences between ground control points and their corresponding laser point clouds, the error is less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for generating high-accuracy and high-density road spatial data rapidly.

  12. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    NASA Astrophysics Data System (ADS)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  13. Connecting Digital Repeat Photography to Ecosystem Fluxes in Inland Pacific Northwest, US Cropping Systems

    NASA Astrophysics Data System (ADS)

    Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.

    2017-12-01

    Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments, but the information they gather can add value to established measurement techniques (i.e., eddy covariance). This work combined deconstructed digital images with eddy covariance data from five agricultural sites (one fallow, four cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera's response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange, as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data, though the camera did capture the general evolution of the ecosystem fluxes.

  14. USGS QA Plan: Certification of digital airborne mapping products

    USGS Publications Warehouse

    Christopherson, J.

    2007-01-01

    To facilitate acceptance of new digital technologies in aerial imaging and mapping, the US Geological Survey (USGS) and its partners have launched a Quality Assurance (QA) Plan for Digital Aerial Imagery. This should provide a foundation for the quality of digital aerial imagery and products. It introduces broader considerations regarding processes employed by aerial flyers in collecting, processing and delivering data, and provides training and information for US producers and users alike.

  15. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  16. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  17. Measuring the Orbital Period of the Moon Using a Digital Camera

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2006-01-01

    A method of measuring the orbital velocity of the Moon around the Earth using a digital camera is described. Separate images of the Moon and stars taken 24 hours apart were loaded into Microsoft PowerPoint and the centre of the Moon marked on each image. Four stars common to both images were connected together to form a "home-made" constellation.…
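The underlying arithmetic: the Moon's shift against the "home-made" constellation over 24 hours gives an angular rate in degrees per day, and the sidereal period is 360° divided by that rate. A sketch with invented offsets:

```python
import math

# Invented measurements for illustration: the Moon's apparent shift against
# background stars over exactly 24 h, in degrees, gives the angular rate;
# the sidereal period is then 360 / rate.
dx_deg, dy_deg = 12.9, 2.8
rate_deg_per_day = math.hypot(dx_deg, dy_deg)
period_days = 360.0 / rate_deg_per_day
print(round(period_days, 1))  # 27.3, close to the sidereal month
```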

  18. Observation of Planetary Motion Using a Digital Camera

    ERIC Educational Resources Information Center

    Meyn, Jan-Peter

    2008-01-01

    A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to apparent magnitude 8. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…

  19. Accuracy assessment of TanDEM-X IDEM using airborne LiDAR on the area of Poland

    NASA Astrophysics Data System (ADS)

    Woroszkiewicz, Małgorzata; Ewiak, Ireneusz; Lulkowska, Paulina

    2017-06-01

    The TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) mission launched in 2010 is another programme - after the Shuttle Radar Topography Mission (SRTM) in 2000 - that uses space-borne radar interferometry to build a global digital surface model. This article presents the accuracy assessment of the TanDEM-X intermediate Digital Elevation Model (IDEM) provided by the German Aerospace Center (DLR) under the project "Accuracy assessment of a Digital Elevation Model based on TanDEM-X data" for the southwestern territory of Poland. The study area included: open terrain, urban terrain and forested terrain. Based on a set of 17,498 reference points acquired by airborne laser scanning, the mean errors of average heights and standard deviations were calculated for areas with a terrain slope below 2 degrees, between 2 and 6 degrees and above 6 degrees. The absolute accuracy of the IDEM data for the analysed area, expressed as a root mean square error (Total RMSE), was 0.77 m.
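The reported statistics (mean error of heights and Total RMSE against LiDAR reference points) follow the standard definitions; a minimal sketch with invented heights:

```python
import math

# Invented heights for illustration: DEM accuracy against reference
# airborne-LiDAR points, as mean error (bias) and root mean square error.
dem_h = [101.2, 99.4, 100.8, 100.1]
ref_h = [100.9, 99.9, 100.1, 100.2]

errs = [d - r for d, r in zip(dem_h, ref_h)]
mean_err = sum(errs) / len(errs)
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
print(round(mean_err, 3), round(rmse, 3))  # 0.1 0.458
```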

  20. Concept design of an 80-dual polarization element cryogenic phased array camera for the Arecibo Radio Telescope

    NASA Astrophysics Data System (ADS)

    Cortes-Medellin, German; Parshley, Stephen; Campbell, Donald B.; Warnick, Karl F.; Jeffs, Brian D.; Ganesh, Rajagopalan

    2016-08-01

    This paper presents the current concept design for ALPACA (Advanced L-Band Phased Array Camera for Arecibo), an L-band cryo-phased-array instrument proposed for the 305 m radio telescope at Arecibo. It includes a cryogenically cooled front end with 160 low-noise amplifiers, an RF-over-fiber signal transport, and a digital beamformer with an instantaneous bandwidth of 312.5 MHz per channel. The camera will digitally form 40 simultaneous beams inside the available field of view of the Arecibo telescope optics, with an expected system temperature goal of 30 K.

  1. [Medical and dental digital photography. Choosing a cheap and user-friendly camera].

    PubMed

    Chossegros, C; Guyot, L; Mantout, B; Cheynet, F; Olivi, P; Blanc, J-L

    2010-04-01

    Digital photography is more and more important in our everyday medical practice. Patient data, medico-legal proof, remote diagnosis, forums, and medical publications are some of the applications of digital photography in medical and dental fields. A lot of small, light, and cheap cameras are on the market. The main issue is to obtain good, reproducible, cheap, and easy-to-shoot pictures. Every medical situation, portrait in esthetic surgery, skin photography in dermatology, X-ray pictures or intra-oral pictures, for example, has its own requirements. For these reasons, we have tried to find an "ideal" compact digital camera. The Sony DSC-T90 (and its T900 counterpart with a wider screen) seems a good choice. Its small size makes it usable in every situation and its price is low. An external light source and a free photo software (XnView®) can be useful complementary tools. The main adjustments and expected results are discussed.

  2. Radiation Hardening of Digital Color CMOS Camera-on-a-Chip Building Blocks for Multi-MGy Total Ionizing Dose Environments

    NASA Astrophysics Data System (ADS)

    Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé

    2017-01-01

    The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the Multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, RH readout chain, Color Filter Arrays (CFA) and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFA and ADC degradations appear to be very weak at the maximum TID of 6 MGy(SiO2), 600 Mrad. In the end, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.

  3. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
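The line-of-sight unit vectors mentioned here come from image analysis; under a simple pinhole model (all parameters invented, not the paper's calibration), a pixel maps to a camera-frame direction as:

```python
import math

# Hedged pinhole sketch (focal lengths and principal point invented):
# convert a pixel (u, v) to a line-of-sight unit vector in the camera frame.
def los_unit_vector(u, v, fx, fy, cx, cy):
    x, y, z = (u - cx) / fx, (v - cy) / fy, 1.0
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

vec = los_unit_vector(700.0, 500.0, 1000.0, 1000.0, 640.0, 480.0)
print(tuple(round(c, 4) for c in vec))  # (0.0599, 0.02, 0.998)
```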

  4. Recent technology and usage of plastic lenses in image taking objectives

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko

    2005-09-01

    Recently, plastic lenses produced by injection molding have been widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras because of their suitability for volume production and the ease of exploiting aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there be no image point variation with temperature change, despite employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. To satisfy these requirements, we have developed a 16× compact zoom objective for camcorders and 3×-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, utilizing the advantage of a plastic lens whose outer flange part can be given a mechanically functional shape. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. Therefore this camera module is manufactured without optical adjustment on an automatic assembly line and achieves both high productivity and high performance. Reported here are the constructions and technical topics of the image-taking objectives described above.

  5. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. 
The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  6. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
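
    Two of the corrections described above, flat-field vignetting removal and a normalized vegetation index computed from the modified camera's NIR and red channels, can be sketched as follows. This is a minimal illustration rather than the study's actual processing chain; the flat-field frame and channel names are assumptions.

```python
import numpy as np

def correct_vignetting(image, flat_field):
    """Flat-field correction: divide out the lens falloff recorded in a
    frame of a uniformly lit target (same lens, aperture and settings)."""
    gain = flat_field.mean() / flat_field   # gain > 1 in the dim corners
    return image * gain

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards dark pixels

# Synthetic check: a radial falloff applied to a flat scene is undone
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
falloff = 1.0 - 0.4 * ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * w)
observed = 100.0 * falloff                  # vignetted flat scene
restored = correct_vignetting(observed, flat_field=100.0 * falloff)
```

    After correction the synthetic flat scene is uniform again; in practice the flat-field frame would be acquired once per camera/lens configuration and reused across acquisition dates.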

  7. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  8. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
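
    For an ideal rectified stereo pair, the stereometric ranging analysis mentioned above reduces to the classic disparity relation Z = fB/d. A minimal sketch, assuming a pinhole model with parallel optical axes (the parameter names are illustrative, not from the patent):

```python
def stereo_range(disparity_px, focal_px, baseline_m):
    """Range from horizontal disparity via similar triangles: Z = f*B/d,
    with the focal length in pixels, baseline in meters, disparity in
    pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

# e.g. a 20 px disparity with a 1000 px focal length and 12 cm baseline
range_m = stereo_range(20.0, 1000.0, 0.12)  # 6.0 m
```

    Note how range resolution degrades with distance: the same one-pixel disparity error costs far more meters for distant targets than for near ones.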

  9. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at both high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023

  11. DIGITAL CARTOGRAPHY OF THE PLANETS: NEW METHODS, ITS STATUS, AND ITS FUTURE.

    USGS Publications Warehouse

    Batson, R.M.

    1987-01-01

    A system has been developed that establishes a standardized cartographic database for each of the 19 planets and major satellites that have been explored to date. Compilation of the databases involves both traditional and newly developed digital image processing and mosaicking techniques, including radiometric and geometric corrections of the images. Each database, or digital image model (DIM), is a digital mosaic of spacecraft images that have been radiometrically and geometrically corrected and photometrically modeled. During compilation, ancillary data files such as radiometric calibrations and refined photometric values for all camera lens and filter combinations and refined camera-orientation matrices for all images used in the mapping are produced.

  12. The different ways to obtain digital images of urine microscopy findings: Their advantages and limitations.

    PubMed

    Fogazzi, G B; Garigali, G

    2017-03-01

    We describe three ways to take digital images of urine sediment findings. Way 1 employs a digital camera permanently mounted on the microscope and connected to a computer equipped with proprietary software to acquire, process and store the images. Way 2 is based on inexpensive compact digital cameras, held by hand or mounted on a tripod close to one eyepiece of the microscope. Way 3 is based on smartphones, held by hand close to one eyepiece of the microscope or connected to the microscope by an adapter. The procedures, advantages and limitations of each way are reported. Copyright © 2017. Published by Elsevier B.V.

  13. Digital video system for on-line portal verification

    NASA Astrophysics Data System (ADS)

    Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott

    1990-07-01

    A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.

  14. Status and Directions of Insect Resistance Monitoring Project

    EPA Science Inventory

    EPA has conducted research since 2004 to investigate the use of remote images to detect pest infestation from a hyperspectral airborne camera. Results from the 2008 field research have shown that pest infestation effects can be detected without foreknowledge of field assessed con...

  15. Remote identification of individual volunteer cotton plants

    USDA-ARS?s Scientific Manuscript database

    Although airborne multispectral remote sensing can identify fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants that can similarly provide habitat for boll weevils. However, when consumer-grade cameras are used, each pix...

  16. Color constancy by characterization of illumination chromaticity

    NASA Astrophysics Data System (ADS)

    Nikkanen, Jarno T.

    2011-05-01

    Computational color constancy algorithms play a key role in achieving desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly results in an overall colour cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
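
    The characterization-based estimator itself is not spelled out in the abstract, but the problem it solves can be illustrated with the much simpler classical gray-world baseline, which assumes the scene average is achromatic. This is a stand-in for illustration, not the algorithm described above.

```python
import numpy as np

def gray_world_gains(rgb):
    """Per-channel gains under the gray-world assumption: the scene
    average is achromatic, so scale R and B so all channel means match
    the green mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means[1] / means                  # gains for (R, G, B)

def apply_white_balance(rgb, gains):
    return np.clip(rgb * gains, 0.0, 1.0)

# Example: a bluish cast (channel means 0.2, 0.4, 0.6) is neutralized
img = np.ones((8, 8, 3)) * np.array([0.2, 0.4, 0.6])
balanced = apply_white_balance(img, gray_world_gains(img))
```

    Gray-world fails on scenes dominated by one color; constraining the estimate to a characterized range of plausible illuminant chromaticities, as the paper proposes, is one way to guard against such failures.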

  17. Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Nepomuk Otte, Adam

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost-effective. We are investigating several directions, including innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched-capacitor arrays for the digitization.

  18. Frequently Asked Questions about Digital Mammography

    MedlinePlus

    ... in digital cameras, which convert x-rays into electrical signals. The electrical signals are used to produce images of the ... DBT? Digital breast tomosynthesis is a relatively new technology. In DBT, the X-ray tube moves in ...

  19. Geometric correction and digital elevation extraction using multiple MTI datasets

    USGS Publications Warehouse

    Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.

    2007-01-01

    Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collects is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality of MTI makes it a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.

  20. The Evaluation of GPS techniques for UAV-based Photogrammetry in Urban Area

    NASA Astrophysics Data System (ADS)

    Yeh, M. L.; Chou, Y. T.; Yang, L. S.

    2016-06-01

    The efficiency and high mobility of Unmanned Aerial Vehicles (UAVs) have made them essential to aerial-photography-assisted survey and mapping. This is especially true for urban land use and land cover, which change frequently and require UAVs to obtain up-to-date terrain data and capture new changes in land use. This study collected image data and three-dimensional ground control points in the Taichung city area using a UAV, a consumer camera, and Real-Time Kinematic (RTK) positioning with centimetre-level accuracy. The study area is an ecological park with low-lying topography that serves the city as a detention basin. A digital surface model was built with Agisoft PhotoScan, along with high-resolution orthophotos. Two conditions were compared, with and without ground control points, and the accuracy of each resulting digital surface model was assessed. According to the check-point deviation estimates, the model without ground control points has an average two-dimensional error of up to 40 centimetres and an altitude error within one metre. The GCP-free RTK-airborne approach thus produces centimetre-level accuracy with low risk to the UAS operators. With ground control points, the accuracy of the x, y, and z coordinates improved by 54.62%, 49.07%, and 87.74% respectively, with altitude improving the most.

  1. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
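
    The multiple-capture idea for enhancing dynamic range can be sketched as follows: several frames of the same scene, taken at different exposure times, are merged, with saturated samples discarded and the rest scaled by exposure time. This is a hedged off-chip illustration of the principle, not the paper's on-sensor implementation; all names and the saturation threshold are assumptions.

```python
import numpy as np

def fuse_exposures(frames, exposure_times, full_scale=1.0, sat=0.95):
    """Merge same-scene captures taken at different exposure times into
    one radiance map: saturated samples are discarded, the rest are
    scaled by 1/exposure and averaged, extending the bright-end range."""
    frames = np.asarray(frames, dtype=float)
    times = np.asarray(exposure_times, dtype=float).reshape(-1, 1, 1)
    valid = frames < sat * full_scale            # mask saturated samples
    scaled = np.where(valid, frames / times, 0.0)
    counts = valid.sum(axis=0)
    return scaled.sum(axis=0) / np.maximum(counts, 1)

# A pixel of true radiance 2.0: the 1x exposure clips at full scale,
# but the 0.25x exposure still encodes it
fused = fuse_exposures([[[0.5]], [[1.0]]], [0.25, 1.0])
```

    On-chip, the same merge can run frame by frame during readout, which is precisely what high frame-rate capture with integrated processing makes practical.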

  2. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems are used across a wide range of science and engineering. Although high-speed camera systems have reached high performance, most applications only acquire high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was reduced to within 0.2 pixels by this method.

  3. Demosaicing images from colour cameras for digital image correlation

    NASA Astrophysics Data System (ADS)

    Forsey, A.; Gungor, S.

    2016-11-01

    Digital image correlation is not the intended use for consumer colour cameras, but with care they can be successfully employed in such a role. The main obstacle is the sparsely sampled colour data caused by the use of a colour filter array (CFA) to separate the colour channels. It is shown that the method used to convert consumer camera raw files into a monochrome image suitable for digital image correlation (DIC) can have a significant effect on the DIC output. A number of widely available software packages and two in-house methods are evaluated in terms of their performance when used with DIC. Using an in-plane rotating disc to produce a highly constrained displacement field, it was found that the bicubic spline based in-house demosaicing method outperformed the other methods in terms of accuracy and aliasing suppression.
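
    The core of any raw-to-monochrome conversion for DIC is reconstructing a full-resolution channel from the sparsely sampled CFA data. Below is a minimal bilinear sketch for the green channel of an RGGB Bayer mosaic; the paper's in-house methods are bicubic-spline based, so this simpler scheme is for illustration only.

```python
import numpy as np

def green_from_bayer(raw):
    """Full-resolution green image from an RGGB Bayer mosaic: measured
    green sites are kept, the rest are bilinearly interpolated from the
    green pixels in their 4-neighbourhood."""
    h, w = raw.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[0::2, 1::2] = True                 # G sites on even rows
    mask[1::2, 0::2] = True                 # G sites on odd rows
    green = np.where(mask, raw, 0.0).astype(float)
    padded = np.pad(green, 1)               # zero-pad for edge pixels
    pmask = np.pad(mask, 1).astype(float)
    num = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:])
    den = (pmask[:-2, 1:-1] + pmask[2:, 1:-1] +
           pmask[1:-1, :-2] + pmask[1:-1, 2:])
    return np.where(mask, green, num / np.maximum(den, 1.0))
```

    Because half of every RGGB mosaic is green, the green channel is the least-aliased choice for a DIC monochrome image; the interpolation kernel nonetheless acts as a low-pass filter, which is one source of the method-dependent DIC differences the paper reports.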

  4. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    PubMed

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection; for instance, the CMOS chips in smartphones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From these experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Any camera can therefore be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  5. Michigan experimental multispectral mapping system: A description of the M7 airborne sensor and its performance

    NASA Technical Reports Server (NTRS)

    Hasell, P. G., Jr.

    1974-01-01

    The development and characteristics of a multispectral band scanner for an airborne mapping system are discussed. The sensor operates in the ultraviolet, visual, and infrared frequencies. Any twelve of the bands may be selected for simultaneous, optically registered recording on a 14-track analog tape recorder. Multispectral imagery recorded on magnetic tape in the aircraft can be laboratory reproduced on film strips for visual analysis or optionally machine processed in analog and/or digital computers before display. The airborne system performance is analyzed.

  6. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micromirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by the CMOS sensor are used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode is successfully demonstrated, using Walsh-design CAOS pixel codes up to 4096 bits long at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micromirror, a square pixel 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright-light spectrally diverse targets.

  7. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    NASA Astrophysics Data System (ADS)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

    This article reports the proposal, realization and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. The method applies to the still-camera automatic working regime and uses a static two-dimensional, spatially continuous light-reflection random target with white-noise properties. The theoretical rationale for this random-target method is presented, exploiting the proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented results of the performed measurements demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
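
    The PSD-ratio idea at the heart of the random-target method can be sketched in a few lines: the MTF is the square root of the output-to-input power-spectral-density ratio, normalized at zero frequency. This bare-bones sketch omits the smoothing and noise handling the full method requires.

```python
import numpy as np

def mtf_from_random_target(output_img, input_img):
    """MTF estimate as the square root of the ratio of the output to the
    input power spectral densities of a white-noise random target,
    normalized to unity at zero spatial frequency."""
    psd_out = np.abs(np.fft.fft2(output_img)) ** 2
    psd_in = np.abs(np.fft.fft2(input_img)) ** 2
    mtf = np.sqrt(psd_out / np.maximum(psd_in, 1e-12))
    return mtf / mtf[0, 0]                 # normalize at DC

# Sanity check: a perfect camera (output == input) has unit MTF
rng = np.random.default_rng(0)
target = rng.random((16, 16)) + 0.1
flat_mtf = mtf_from_random_target(target, target)
```

    In a real measurement the raw ratio is noisy, which is why the paper normalizes and smooths the spectral estimates before taking the ratio.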

  8. Digital dental photography. Part 6: camera settings.

    PubMed

    Ahmad, I

    2009-07-25

    Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.

  9. [Results of testing of MINISKAN mobile gamma-ray camera and specific features of its design].

    PubMed

    Utkin, V M; Kumakhov, M A; Blinov, N N; Korsunskiĭ, V N; Fomin, D K; Kolesnikova, N V; Tultaev, A V; Nazarov, A A; Tararukhina, O B

    2007-01-01

    The main results of engineering, biomedical, and clinical testing of MINISKAN mobile gamma-ray camera are presented. Specific features of the camera hardware and software, as well as the main technical specifications, are described. The gamma-ray camera implements a new technology based on reconstructive tomography, aperture encoding, and digital processing of signals.

  10. Algorithms used in the Airborne Lidar Processing System (ALPS)

    USGS Publications Warehouse

    Nagle, David B.; Wright, C. Wayne

    2016-05-23

    The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
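
    The centroid-analysis detector can be illustrated simply: the target position within a waveform is the amplitude-weighted mean sample index. This is a minimal sketch of that first step only, omitting ALPS's range calibration and noise handling; the threshold parameter is an assumption.

```python
def waveform_centroid(samples, threshold=0.0):
    """Amplitude-weighted mean sample index of a laser-return waveform;
    amplitudes at or below the threshold are ignored."""
    total = 0.0
    weight = 0.0
    for i, s in enumerate(samples):
        a = max(s - threshold, 0.0)   # clip the noise floor
        total += i * a
        weight += a
    if weight == 0.0:
        raise ValueError("no return above threshold")
    return total / weight

# A symmetric pulse centred on sample 2
peak = waveform_centroid([0.0, 1.0, 2.0, 1.0, 0.0])  # 2.0
```

    The fractional sample index returned here would then be converted to a slant range using the digitizer rate and the speed of light, and projected into map coordinates as the report describes.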

  11. Generation of high-dynamic range image from digital photo

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han

    2016-10-01

    A number of modern applications, such as medical imaging, remote sensing satellite imaging and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. The article proposes a camera calibration method that uses the clear sky as a standard light source, taking sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article considers the basic algorithms for recovering real luminance values from an ordinary digital image and the corresponding programmed implementation of those algorithms. Examples of HDRI reconstructed from ordinary images illustrate the article.
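
    As an illustration of the general idea (not the article's actual CIE-sky calibration), a minimal sketch can invert an assumed power-law camera response and fix the absolute scale with one reference patch of known luminance. The gamma value and reference numbers below are hypothetical:

```python
import numpy as np

def relative_luminance(pixel_values, gamma=2.2, white_level=255.0):
    """Invert an assumed power-law camera response to get scene luminance
    up to an unknown scale factor. The gamma value is a hypothetical
    stand-in for a per-camera calibrated response curve."""
    v = np.clip(np.asarray(pixel_values, dtype=float) / white_level, 0.0, 1.0)
    return v ** gamma

def absolute_luminance(pixel_values, ref_pixel, ref_cd_m2, gamma=2.2):
    """Fix the scale with one reference patch of known luminance, e.g. a
    clear-sky pixel whose cd/m^2 value comes from the CIE sky model."""
    return relative_luminance(pixel_values, gamma) * (
        ref_cd_m2 / relative_luminance(ref_pixel, gamma))
```

    With the scale fixed, every pixel of the ordinary image maps to an estimated luminance, which is the HDRI reconstruction the abstract describes.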

  12. Potential and limitations of using digital repeat photography to track structural and physiological phenology in Mediterranean tree-grass ecosystems

    NASA Astrophysics Data System (ADS)

    Luo, Yunpeng; EI-Madany, Tarek; Filippa, Gianluca; Carrara, Arnaud; Cremonese, Edoardo; Galvagno, Marta; Hammer, Tiana; Pérez-Priego, Oscar; Reichstein, Markus; Martín Isabel, Pilar; González Cascón, Rosario; Migliavacca, Mirco

    2017-04-01

    Tree-grass ecosystems are widely distributed globally (16-35% of the land surface). However, their phenology (especially in water-limited areas) has not yet been well characterized and modeled. Commercial digital cameras provide continuous and relatively extensive phenology data, offering a good opportunity to monitor phenology and to develop robust methods for extracting important phenological events (phenophases). Here we aimed to assess the usability of digital repeat photography for three Mediterranean tree-grass ecosystems over two growing seasons (Majadas del Tietar, Spain), to extract critical phenophases for grass and evergreen broadleaved trees (autumn regreening of grass - start of growing season; resprouting of tree leaves; senescence of grass - end of growing season), to assess their uncertainty, and to correlate them with physiological phenology (i.e., the phenology of ecosystem-scale fluxes such as gross primary productivity, GPP). We extracted green chromatic coordinates (GCC) and a camera-based normalized difference vegetation index (Camera-NDVI) from an infrared-enabled digital camera using the "Phenopix" R package. We then developed a novel method to retrieve important phenophases from GCC and Camera-NDVI for various regions of interest (ROIs) of the imagery (tree areas, grass, and both - ecosystem), as well as from GPP derived from the eddy covariance tower at the same experimental site. The results show that, at the ecosystem level, phenophases derived from GCC and Camera-NDVI are strongly correlated (R2 = 0.979). Remarkably, we observed that at the end of the growing season, phenophases derived from GCC were systematically advanced (ca. 8 days) relative to phenophases from Camera-NDVI. Using the radiative transfer model Soil Canopy Observation Photochemistry and Energy (SCOPE), we demonstrated that this delay is related to the different sensitivity of GCC and NDVI to the fraction of green/dry grass in the canopy, resulting in a systematically higher NDVI during the dry-down of the canopy. Phenophases derived from GCC and Camera-NDVI are correlated with phenophases extracted from GPP across sites and years (R2 = 0.966 and 0.976, respectively). For the start of the growing season the determination coefficient was higher (R2 = 0.89 and 0.98 for GCC vs. GPP and Camera-NDVI vs. GPP, respectively) than for the end of the growing season (R2 = 0.75 and 0.70 for GCC and Camera-NDVI, respectively). The statistics obtained using phenophases derived from the grass or ecosystem ROIs are similar. In contrast, GCC and Camera-NDVI derived from the tree ROIs are relatively constant and not related to the seasonality of GPP. However, the GCC of the trees shows a characteristic peak that is synchronous with leaf flushing in spring, as assessed using regular chlorophyll content measurements and automatic dendrometers. In conclusion, we first developed a method to derive phenological events of tree-grass ecosystems using digital repeat photography; second, we demonstrated that the phenology of GPP is strongly dominated by the phenology of the grassland layer; third, we discussed the uncertainty related to the use of GCC and Camera-NDVI during senescence; and finally, we demonstrated the capability of GCC to track crucial phenological events in evergreen broadleaved trees. Our findings confirm that digital repeat photography is a vital data source for characterizing phenology in Mediterranean tree-grass ecosystems.
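
    The two greenness indices used in this record are simple per-pixel band ratios. A minimal sketch of their definitions (not the "Phenopix" implementation, which adds ROI handling and time-series filtering):

```python
import numpy as np

def green_chromatic_coordinate(r, g, b):
    """GCC = G / (R + G + B), the greenness index commonly used in
    digital repeat photography (computed per pixel, then averaged
    over a region of interest)."""
    r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
    return g / (r + g + b)

def camera_ndvi(nir, red):
    """Camera-based NDVI from an infrared-enabled camera:
    (NIR - R) / (NIR + R)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)
```

    In practice both indices are averaged over an ROI per image, and phenophases are then extracted from the resulting seasonal time series.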

  13. Tension zones of deep-seated rockslides revealed by thermal anomalies and airborne laser scan data

    NASA Astrophysics Data System (ADS)

    Baroň, Ivo; Bečkovský, David; Gajdošík, Juraj; Opálka, Filip; Plan, Lukas; Winkler, Gerhard

    2015-04-01

    Open cracks, tension fractures and crevice caves are important diagnostic features of gravitationally deformed slopes. When cracks in the upper part of the slope open to the ground surface, they transfer relatively warm and buoyant air from underground in cold seasons and thus can be detected by infrared thermography (IRT) as warm anomalies. Here we present two IRT surveys of deep-seated rockslides in Austria and the Czech Republic. We used Flir and Optris thermal imaging cameras, operated manually from the ground surface as well as from unmanned aerial vehicle and piloted ultralight-plane platforms. The surveys were conducted during cold days of winter 2014/2015 and early in the morning to avoid the negative effect of direct sunshine. The first study site is the Bad Fischau rockslide in the southern part of the Vienna Basin (Austria). It was first identified by morphostructural analysis of a 1-m digital terrain model derived from airborne laser scan data. The rockslide is superimposed on, and closely related to, the active marginal faults of the Vienna Basin, which is a pull-apart structure. The 80-m-deep Eisenstein Show Cave is situated in the southern lateral margin of the rockslide. The cave was originally attributed purely to hydrothermal (hypogene) karstification; however, its specific morphology and position within the detachment zone of the rockslide suggest a relation to gravitational slope failure. The IRT survey revealed the Eisenstein Cave at the ground surface, as well as several other open cracks and possible cleft caves along the margins, headscarp, and body of the rockslide. The second surveyed site was the Kněhyně rockslide in the flysch belt of the Outer Western Carpathians in the eastern Czech Republic. This deep-seated translational rockslide formed about eight known pseudokarst crevice caves, which reach up to 57 m in depth. The IRT survey recognized several warm anomalies indicating very deep deformation of the slope. When compared to the digital terrain model, some of these thermal anomalies suggest large unexplored crack systems deep in the rock-slope failure. In conclusion, we note that detecting thermal anomalies, especially when compared with topographic structures visualized on high-accuracy digital terrain models, can significantly contribute to understanding the subsurface occurrence of tension fractures and voids within deep-seated rockslide bodies.

  14. Definition and trade-off study of reconfigurable airborne digital computer system organizations

    NASA Technical Reports Server (NTRS)

    Conn, R. B.

    1974-01-01

    A highly reliable, fault-tolerant, reconfigurable computer system for aircraft applications was developed. The development and application of reliability and fault-tolerance assessment techniques are described. Particular emphasis is placed on the needs of an all-digital, fly-by-wire control system appropriate for a passenger-carrying airplane.

  15. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
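
    The linear correction described above can be sketched as a standard flat-field operation: subtract the dark image, normalize by the (dark-subtracted) base correction image, and rescale to a reference digital level. Function and parameter names here are illustrative, not the authors' code:

```python
import numpy as np

def correct_nonuniformity(raw, dark, base, reference=None):
    """Linear spatial nonuniformity (flat-field) correction sketch:
    subtract the dark image, normalize by the base correction image
    (also dark-subtracted), and rescale to a reference digital level,
    by default the mean level of the dark-subtracted base image."""
    raw, dark, base = (np.asarray(a, dtype=float) for a in (raw, dark, base))
    gain = base - dark                      # per-pixel responsivity map
    if reference is None:
        reference = gain.mean()             # mean digital level as reference
    return (raw - dark) / gain * reference
```

    A uniform radiance field imaged through a nonuniform per-pixel gain comes out flat after this correction, which is the behavior the abstract reports for the integrator-cube test.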

  16. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

    In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.

  17. [A review of atmospheric aerosol research by using polarization remote sensing].

    PubMed

    Guo, Hong; Gu, Xing-Fa; Xie, Dong-Hai; Yu, Tao; Meng, Qing-Yan

    2014-07-01

    In the present paper, aerosol research using polarization remote sensing over the last two decades (1993-2013) is reviewed, including research based on POLDER/PARASOL, the APS (Aerosol Polarimetry Sensor), polarized airborne cameras and ground-based measurements. We emphasize the following three aspects: (1) the retrieval algorithms developed for land and marine aerosol using POLDER/PARASOL, the validation and application of POLDER/PARASOL AOD, and cross-comparison with the AOD of other satellites, such as MODIS. (2) The retrieval algorithms developed for land and marine aerosol using MICROPOL and RSP/APS. We also introduce new progress in aerosol research based on the Directional Polarimetric Camera (DPC), produced by the Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences (CAS). (3) The aerosol retrieval algorithms using measurements from ground-based instruments, such as CE318-2 and CE318-DP. The retrieval results from spaceborne sensors, airborne cameras and ground-based measurements include total AOD, fine-mode AOD, coarse-mode AOD, size distribution, particle shape, complex refractive indices, single scattering albedo, scattering phase function, polarization phase function and AOD above cloud. Finally, based on this research, the authors present the problems and prospects of atmospheric aerosol research using polarization remote sensing, providing a valuable reference for future studies of atmospheric aerosols.

  18. SOFIA Science Instruments: Commissioning, Upgrades and Future Opportunities

    NASA Technical Reports Server (NTRS)

    Smith, Erin C.

    2014-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is the world's largest airborne observatory, featuring a 2.5-meter telescope housed in the aft section of a Boeing 747SP aircraft. SOFIA's current instrument suite includes: FORCAST (Faint Object InfraRed CAmera for the SOFIA Telescope), a 5-40 µm dual-band imager/grism spectrometer developed at Cornell University; HIPO (High-speed Imaging Photometer for Occultations), a 0.3-1.1 micron imager built by Lowell Observatory; FLITECAM (First Light Infrared Test Experiment CAMera), a 1-5 micron wide-field imager/grism spectrometer developed at UCLA; FIFI-LS (Far-Infrared Field-Imaging Line Spectrometer), a 42-210 micron IFU grating spectrograph completed by the University of Stuttgart; and EXES (Echelon-Cross-Echelle Spectrograph), a 5-28 micron high-resolution spectrometer being completed by UC Davis and NASA Ames. A second-generation instrument, HAWC+ (High-resolution Airborne Wideband Camera), is a 50-240 micron imager being upgraded at JPL to add polarimetry and new detectors developed at GSFC. SOFIA will continually update its instrument suite with new instrumentation, technology demonstration experiments and upgrades to the existing instruments. This paper details instrument capabilities and status as well as plans for future instrumentation, including the call for proposals for third-generation SOFIA science instruments.

  19. Gas Concentration Mapping of Arenal Volcano Using AVEMS

    NASA Technical Reports Server (NTRS)

    Diaz, J. Andres; Arkin, C. Richard; Griffin, Timothy P.; Conejo, Elian; Heinrich, Kristel; Soto, Carlomagno

    2005-01-01

    The Airborne Volcanic Emissions Mass Spectrometer (AVEMS) System, developed by NASA-Kennedy Space Center and deployed in collaboration with the National Center for Advanced Technology (CENAT) and the University of Costa Rica, was used for mapping the volcanic plume of Arenal Volcano, the most active volcano in Costa Rica. The measurements were conducted as part of the second CARTA (Costa Rica Airborne Research and Technology Application) mission, conducted in March 2005. The CARTA 2005 mission, involving multiple sensors and agencies, consisted of three different planes collecting data over all of Costa Rica. The WB-57F from NASA collected ground data with a digital camera, an analog photogrammetric camera (RC-30), a multispectral scanner (MASTER) and a hyperspectral sensor (HYMAP). The second aircraft, a King Air 200 from DoE, equipped with a LIDAR-based instrument, targeted topographic mapping and forest density measurements. A smaller third aircraft, a Navajo from Costa Rica, integrated with the AVEMS instrument and designed for real-time measurements of air pollutants from both natural and anthropogenic sources, was flown over the volcanoes. The improved AVEMS system is designed for deployment via aircraft, car or hand transport. The 85-pound system employs a 200-Da quadrupole mass analyzer, has a volume of 92,000 cubic cm, requires 350 W of power at steady state, and can operate up to an altitude of 41,000 feet above sea level (-65 C; 50 torr). The system uses on-board gas bottles for on-site calibration and is capable of monitoring and quantifying up to 16 gases simultaneously. The in-situ gas data in this work, consisting of helium, carbon dioxide, sulfur dioxide and acetone, were acquired in conjunction with GPS data and plotted with the ground imagery, topography and remote sensing data collected by the other instruments, allowing three-dimensional visualization of the volcanic plume at Arenal Volcano. The modeling of possible scenarios of Arenal's activity and its direct impact on the surrounding populated areas is now possible with the combined dataset, linking in-situ data with remote sensing data. The study also helps in understanding pyroclastic flow behavior in case of a major eruption.

  20. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    NASA Astrophysics Data System (ADS)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building up strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work, high financial requirements, and the influence of weather conditions and topographic cover, which can be overcome by means of integrated airborne LiDAR and very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over the city of Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of digital images, using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and LiDAR point cloud datasets. The performance of the developed algorithms shows promising results as an automated and cost-effective approach to estimating and delineating 3D information of urban trees. The research also shows that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.

  1. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify the behavior of aquatic fauna in response to a discharge disturbance.

  2. Nonvolatile memory chips: critical technology for high-performance recce systems

    NASA Astrophysics Data System (ADS)

    Kaufman, Bruce

    2000-11-01

    Airborne recce systems universally require nonvolatile storage of recorded data. Both present and next-generation designs make use of flash memory chips. Flash memory devices are in high-volume use in a variety of commercial products ranging from cellular phones to digital cameras. Fortunately, commercial applications call for increasing capacities and fast write times; these parameters are important to the designer of recce recorders. Of economic necessity, COTS devices are used in recorders that must perform in military avionics environments. Concurrently, recording rates are moving to >10 Gb/s. Thus, to capture imagery for even a few minutes of record time, tactically meaningful solid-state recorders will require storage capacities in the hundreds of gigabytes. Even with present-day memory chip densities of 512 Mb, such capacities require thousands of chips, and the demands on packaging technology are daunting. This paper considers the differing flash chip architectures, both available and projected, and discusses their impact on recorder architecture and performance. Emerging nonvolatile memory technologies, FeRAM and MRAM, are reviewed with regard to their potential use in recce recorders.
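
    The recorder sizing claim can be checked with back-of-envelope arithmetic; the 3-minute record time below is an assumed example, not a figure from the paper:

```python
def recorder_sizing(rate_gbps, minutes, chip_mbits=512):
    """Back-of-envelope recorder sizing: total bits at the given record
    rate, converted to gigabytes of storage and to a count of flash
    chips of the given capacity in megabits."""
    bits = rate_gbps * 1e9 * minutes * 60.0
    gigabytes = bits / 8.0 / 1e9
    chips = bits / (chip_mbits * 1e6)
    return gigabytes, chips

# Three minutes at 10 Gb/s: 225 GB, i.e. roughly 3,500 chips of 512 Mb each.
gb, chips = recorder_sizing(10, 3)
```

    This confirms the abstract's claim of hundreds-of-gigabyte capacities built from thousands of 512 Mb devices.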

  3. The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)

    NASA Astrophysics Data System (ADS)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds is used in many geomatics engineering applications, such as building extraction in urban areas, digital terrain model (DTM) generation, and road or urban furniture extraction. Segmentation is the process of dividing a point cloud into layers according to their special characteristics. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (artificial neural network) algorithm, for the segmentation of point clouds. Point clouds generated with the photogrammetric method and with a terrestrial lidar system (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (light detection and ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, it is possible to obtain a point cloud from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner; in the photogrammetric method, the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
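
    A minimal k-means sketch over per-point feature vectors (e.g. surface-normal components, intensity, curvature) illustrates the clustering step; this is a generic implementation of the technique, not the authors' code, and it uses a deterministic initialization (evenly spaced input rows) for reproducibility:

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Minimal k-means for point-cloud segmentation: each point carries a
    feature vector and is assigned to the nearest of k cluster centers,
    which are then recomputed as cluster means."""
    X = np.asarray(features, dtype=float)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Distance from every point to every center, then nearest-center labels.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old center if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

    In practice, heterogeneous features (normals vs. intensity vs. curvature) would be normalized to comparable scales before clustering.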

  4. Use and validation of mirrorless digital single lens reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  5. Use and validation of mirrorless digital single lens reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  6. An Inexpensive Digital Infrared Camera

    ERIC Educational Resources Information Center

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  7. Research on an optoelectronic measurement system of dynamic envelope measurement for China Railway high-speed train

    NASA Astrophysics Data System (ADS)

    Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun

    2018-01-01

    Dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains. To date, there has been no digital measurement system to solve this problem. This paper develops an optoelectronic measurement system using monocular digital cameras, and presents research on the measurement theory, visual target design, calibration algorithm design, software programming and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software and a terminal computer. The system has advantages such as a large measurement scale, a high degree of automation, strong anti-interference ability, noise rejection and real-time measurement. In this paper, we address key technologies such as the transfer, storage and processing of the multiple cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error is within 0.12 mm over the whole workspace. The experiment verified the rationality of the system scheme and the correctness, precision and effectiveness of the relevant methods.

  8. Estimation of color modification in digital images by CFA pattern change.

    PubMed

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy.
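
    The intermediate-value idea can be illustrated on the green channel: demosaicked (interpolated) pixels tend to fall strictly between their four neighbors, so counting how often pixels at each 2x2 lattice phase are intermediate values points to the CFA phase, and a shift in that phase after editing indicates color modification. This is a simplified sketch of the general principle, not the paper's advanced counting method:

```python
import numpy as np

def estimate_green_phase(green):
    """For each of the four 2x2 lattice phases, compute the fraction of
    pixels at that phase that lie strictly between the min and max of
    their four neighbors; interpolated (demosaicked) sites score near 1,
    so the highest-scoring phase estimates the CFA pattern alignment."""
    g = np.asarray(green, dtype=float)
    up, down = g[:-2, 1:-1], g[2:, 1:-1]
    left, right = g[1:-1, :-2], g[1:-1, 2:]
    lo = np.minimum(np.minimum(up, down), np.minimum(left, right))
    hi = np.maximum(np.maximum(up, down), np.maximum(left, right))
    inter = (g[1:-1, 1:-1] > lo) & (g[1:-1, 1:-1] < hi)  # strictly between
    fracs = {}
    for pr in (0, 1):
        for pc in (0, 1):
            # inter[i, j] corresponds to g[i + 1, j + 1]
            fracs[(pr, pc)] = inter[(pr + 1) % 2::2, (pc + 1) % 2::2].mean()
    return max(fracs, key=fracs.get), fracs
```

    Hue rotation or channel swapping moves the interpolated lattice to a different phase, which is the detectable trace the abstract describes.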

  9. MS Kavandi with camera in Service Module

    NASA Image and Video Library

    2001-07-16

    STS104-E-5125 (16 July 2001) --- Astronaut Janet L. Kavandi, STS-104 mission specialist, uses a camera as she floats through the Zvezda service module aboard the International Space Station (ISS). The five STS-104 crew members were visiting the orbital outpost to perform various tasks. The image was recorded with a digital still camera.

  10. Camera Calibration with Radial Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role today. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other lightweight flying platforms. The in-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements as a function of the distance of points from the image center has been analyzed. This test provides a curve of measurement precision as a function of photo radius. A large number of camera types have been tested with well-distributed point measurements in image space. The results of the tests lead to a general conclusion demonstrating a functional connection between accuracy and radial distance, and provide a method to check and enhance the geometric capability of cameras with respect to these results.

  11. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm, which is based on the plane template, is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera to obtain calibration parameters, which are used to correct calculation points of the image before and after deformation. The displacement field before and after correction is compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.

  12. Current status of Polish Fireball Network

    NASA Astrophysics Data System (ADS)

    Wiśniewski, M.; Żołądek, P.; Olech, A.; Tyminski, Z.; Maciejewski, M.; Fietkiewicz, K.; Rudawska, R.; Gozdalski, M.; Gawroński, M. P.; Suchodolski, T.; Myszkiewicz, M.; Stolarz, M.; Polakowski, K.

    2017-09-01

    The Polish Fireball Network (PFN) is a project to regularly monitor the sky over Poland in order to detect bright fireballs. In 2016 the PFN consisted of 36 continuously active stations with 57 sensitive analogue video cameras and 7 high-resolution digital cameras. In our observations we also use spectroscopic and radio techniques. The PyFN software package for trajectory and orbit determination was developed. The PFN project is an example of the successful participation of amateur astronomers, who can provide valuable scientific data. The network is coordinated by astronomers from the Copernicus Astronomical Centre in Warsaw, Poland. In 2011-2015 the PFN cameras recorded 214,936 meteor events. Using the PFN data and the UFOOrbit software, 34,609 trajectories and orbits were calculated. In the coming years we plan an intensive modernization of the network, including the installation of dozens of new digital cameras.

  13. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
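    The working-distance dependence noted above has a simple pinhole-model intuition: an uncompensated out-of-plane rigid translation dZ rescales the image by Z/(Z+dZ), which DIC reads as a uniform apparent strain of roughly -dZ/Z. The sketch below illustrates this with assumed (not the paper's) values:

```python
# Pseudo-strain from rigid out-of-plane motion under a pinhole model.
def pseudo_strain(working_distance_mm, out_of_plane_mm):
    Z, dZ = working_distance_mm, out_of_plane_mm
    return Z / (Z + dZ) - 1.0      # apparent strain, ~ -dZ/Z for small dZ

for Z in (500.0, 2000.0):          # short vs long working distance
    eps = pseudo_strain(Z, 0.1)    # 0.1 mm rigid out-of-plane motion
    print(f"Z = {Z:.0f} mm: {eps * 1e6:.0f} microstrain")
```

    A short working distance of 0.5 m turns a 0.1 mm rigid motion into roughly 200 microstrain of error, consistent in magnitude with the tens-of-microstrain effects the abstract reports.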

  14. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  15. An automated system for whole microscopic image acquisition and analysis.

    PubMed

    Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús

    2014-09-01

    The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochromatic camera, alongside the default color camera, and LED transmitted illumination (RGB). Monochrome cameras are the preferred acquisition method for fluorescence microscopy. The system is able to digitize samples correctly and form large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus. The system has been tested on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the image processing techniques included in the software, applied to the different tissue samples, are also presented. © 2014 Wiley Periodicals, Inc.

  16. First light on a new fully digital camera based on SiPM for CTA SST-1M telescope

    NASA Astrophysics Data System (ADS)

    della Volpe, Domenico; Al Samarai, Imen; Alispach, Cyril; Bulik, Tomasz; Borkowski, Jerzy; Cadoux, Franck; Coco, Victor; Favre, Yannick; Grudzińska, Mira; Heller, Matthieu; Jamrozy, Marek; Kasperek, Jerzy; Lyard, Etienne; Mach, Emil; Mandat, Dusan; Michałowski, Jerzy; Moderski, Rafal; Montaruli, Teresa; Neronov, Andrii; Niemiec, Jacek; Njoh Ekoume, T. R. S.; Ostrowski, Michal; Paśko, Paweł; Pech, Miroslav; Rajda, Pawel; Rafalski, Jakub; Schovanek, Petr; Seweryn, Karol; Skowron, Krzysztof; Sliusar, Vitalii; Stawarz, Łukasz; Stodulska, Magdalena; Stodulski, Marek; Travnicek, Petr; Troyano Pujadas, Isaac; Walter, Roland; Zagdański, Adam; Zietara, Krzysztof

    2017-08-01

    The Cherenkov Telescope Array (CTA) will explore the Universe in the gamma-ray domain with unprecedented precision, covering an energy range from 50 GeV to more than 300 TeV. To cover such a broad range with a sensitivity ten times better than current instruments, different types of telescopes are needed: the Large Size Telescopes (LSTs), with a ˜24 m diameter mirror; the Medium Size Telescopes (MSTs), with a ˜12 m mirror; and the Small Size Telescopes (SSTs), with a ˜4 m diameter mirror. The single-mirror small size telescope (SST-1M), one of the proposed solutions for the small-size telescopes of CTA, will be equipped with an innovative camera. The SST-1M has a Davies-Cotton optical design with a mirror dish of 4 m diameter and focal ratio 1.4, focusing the Cherenkov light produced in atmospheric showers onto a 90 cm wide hexagonal camera providing a FoV of 9 degrees. The camera is an innovative design based on silicon photomultipliers (SiPMs), adopting a fully digital trigger and readout architecture. The camera features 1296 custom-designed large-area hexagonal SiPMs coupled to hollow optical concentrators to achieve a pixel size of almost 2.4 cm. The SiPM is a custom design developed with Hamamatsu and, with its active area of almost 1 cm2, is one of the largest monolithic SiPMs in existence. The optical concentrators are also innovative, being light funnels made of a polycarbonate substrate coated with a custom-designed UV-enhancing coating. The analog signals coming from the SiPMs are fed into the fully digital readout electronics, where digital data are processed by high-speed FPGAs for both trigger and readout. The trigger logic, implemented in a Virtex 7 FPGA, uses the digital data to elaborate a trigger decision by matching data against predefined patterns. This approach is extremely flexible and allows improvements and continued evolution of the system. The prototype camera is being tested in the laboratory prior to its expected installation in fall 2017 on the telescope prototype in Krakow, Poland. In this contribution, we describe the design of the camera and show the performance measured in the laboratory.

  17. First Results of Digital Topography Applied to Macromolecular Crystals

    NASA Technical Reports Server (NTRS)

    Lovelace, J.; Soares, A. S.; Bellamy, H.; Sweet, R. M.; Snell, E. H.; Borgstahl, G.

    2004-01-01

    An inexpensive digital CCD camera was used to record X-ray topographs directly from large imperfect crystals of cubic insulin. The topographs recorded were not as detailed as those that can be measured with film or emulsion plates, but they do show great promise. Six reflections were recorded using a set of finely spaced stills encompassing the rocking curve of each reflection. A complete topographic reflection profile could be digitally imaged in minutes. Interesting and complex internal structure was observed by this technique. The CCD chip used in the camera has anti-blooming circuitry and produced good data quality even when pixels became overloaded.

  18. Digital Photography and Its Impact on Instruction.

    ERIC Educational Resources Information Center

    Lantz, Chris

    Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…

  19. Quantifying seasonal variation of leaf area index using near-infrared digital camera in a rice paddy

    NASA Astrophysics Data System (ADS)

    Hwang, Y.; Ryu, Y.; Kim, J.

    2017-12-01

    Digital cameras have been widely used to quantify leaf area index (LAI). Numerous simple and automatic methods have been proposed to improve digital camera based LAI estimates. However, most studies in rice paddies relied on arbitrary thresholds or complex radiative transfer models to make binary images. Moreover, only a few studies reported continuous, automatic observation of LAI over the season in a rice paddy. The objective of this study is to quantify seasonal variations of LAI using raw near-infrared (NIR) images coupled with a histogram shape-based algorithm in a rice paddy. As vegetation strongly reflects NIR light, we installed a NIR digital camera 1.8 m above the ground surface and acquired unsaturated raw-format images at one-hour intervals at solar zenith angles between 15° and 80° over the entire growing season in 2016 (from May to September). We applied a sub-pixel classification combined with a light scattering correction method. Finally, to confirm the accuracy of the quantified LAI, we also conducted direct (destructive sampling) and indirect (LAI-2200) manual observations of LAI once per ten days on average. Preliminary results show that NIR-derived LAI agreed well with in-situ observations, but divergence tended to appear once the rice canopy was fully developed. The continuous monitoring of LAI in rice paddies will help to better understand carbon and water fluxes and to evaluate satellite-based LAI products.
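    The two-step chain the abstract describes can be sketched as follows: a histogram shape-based threshold (here Otsu's method, one common choice) separates bright NIR canopy pixels from background, and the resulting gap fraction yields an LAI estimate via the Beer-Lambert relation LAI = -ln(P_gap)/k. The extinction coefficient k and the synthetic image are illustrative assumptions, not values from the study:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Histogram shape-based threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # cumulative class probability
    mu = np.cumsum(p * centers)            # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

rng = np.random.default_rng(0)
# Synthetic NIR frame: bright canopy patch (~0.8) over dark paddy water (~0.2)
img = rng.normal(0.2, 0.03, (200, 200))
img[50:150, 40:160] = rng.normal(0.8, 0.03, (100, 120))

t = otsu_threshold(img)
gap_fraction = np.mean(img < t)            # non-vegetation fraction
k = 0.5                                    # assumed extinction coefficient
lai = -np.log(gap_fraction) / k
print(round(gap_fraction, 2), round(lai, 2))
```

    A real pipeline would add the sub-pixel classification and scattering correction the authors mention; this sketch only shows the binary-image and gap-fraction core.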

  20. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 µm, focal length of 1.8 m, FOV of 5.4x5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  1. Thermal remote sensing approach combined with field spectroscopy for detecting underground structures intended for defence and security purposes in Cyprus

    NASA Astrophysics Data System (ADS)

    Melillos, George; Themistocleous, Kyriacos; Hadjimitsis, Diofantos G.

    2018-04-01

    The purpose of this paper is to present the results obtained from an unmanned aerial vehicle (UAV) using multispectral and thermal imaging sensors and field spectroscopy campaigns for detecting underground structures. Airborne thermal prospecting is based on the principle that there is a fundamental difference between the thermal characteristics of underground structures and the environment in which they are situated. This study aims to combine the flexibility and low cost of using an airborne drone with the registration accuracy of a thermal digital camera. This combination allows the use of thermal prospection for underground structure detection at low altitude with high-resolution information. In addition, vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and Simple Ratio (SR) were utilized for the development of a vegetation index-based procedure aiming at the detection of underground military structures using existing vegetation indices or other in-band algorithms. The measurements were taken in the following test areas: (a) an area covered with vegetation (barley) in the presence of an underground military structure; (b) an area covered with vegetation (barley) in the absence of an underground military structure. It is important to highlight that this research is undertaken at the ERATOSTHENES Research Centre, which received funding to be transformed into an EXcellence Research Centre for Earth SurveiLlance and Space-Based MonItoring Of the EnviRonment (Excelsior) from the HORIZON 2020 Widespread-04-2017: Teaming Phase 1 call (Grant agreement no: 763643).

  2. Digital Elevation Model from Non-Metric Camera in Uas Compared with LIDAR Technology

    NASA Astrophysics Data System (ADS)

    Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.

    2015-08-01

    Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for use in spatial analysis and modelling. To that end, many methods of acquiring and processing data have been developed, from traditional surveying to modern technology like LIDAR. On the other hand, in the past four years the development of Unmanned Aerial Systems (UAS) for geomatics has brought the possibility of acquiring surface data with an on-board non-metric digital camera in a short time and with good quality for some analyses. Data collection with UAS has attracted tremendous attention due to the possibility of determining volume changes over time, monitoring of breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on a DEM for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper is aimed at analysing the quality of a DEM from a non-metric digital camera on a UAS compared with a DEM from LIDAR covering the same geographic space of 4 km2 in Artemisa province, Cuba. This area is in a frame of urban planning that requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best places for roads, buildings and so on. Because LIDAR is still the more accurate technology, it offers a benchmark for testing DEMs from non-metric digital cameras on UAS, which are much more flexible and provide a solution for many applications that need a detailed DEM.
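    The kind of quality check such a comparison implies can be sketched in a few lines: difference the UAS-derived DEM against the LIDAR reference on a common grid and report RMSE plus the robust NMAD statistic. The grids below are synthetic assumptions, not the Artemisa data:

```python
import numpy as np

def dem_errors(dem_test, dem_ref):
    """RMSE and normalized median absolute deviation of DEM differences."""
    diff = (dem_test - dem_ref).ravel()
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    nmad = float(1.4826 * np.median(np.abs(diff - np.median(diff))))
    return rmse, nmad

rng = np.random.default_rng(7)
lidar = rng.uniform(40, 60, (100, 100))            # reference elevations (m)
uas = lidar + rng.normal(0.05, 0.15, (100, 100))   # small bias + noise

rmse, nmad = dem_errors(uas, lidar)
print(round(rmse, 2), round(nmad, 2))
```

    NMAD is less sensitive than RMSE to outliers (e.g. vegetation artifacts in photogrammetric DEMs), which is why both are commonly reported together.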

  3. DETECTION AND IDENTIFICATION OF TOXIC AIR POLLUTANTS USING FIELD PORTABLE AND AIRBORNE REMOTE IMAGING SYSTEMS

    EPA Science Inventory

    Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...

  4. Using oblique digital photography for alluvial sandbar monitoring and low-cost change detection

    USGS Publications Warehouse

    Tusso, Robert B.; Buscombe, Daniel D.; Grams, Paul E.

    2015-01-01

    The maintenance of alluvial sandbars is a longstanding management interest along the Colorado River in Grand Canyon. Resource managers are interested in both the long-term trend in sandbar condition and the short-term response to management actions, such as intentional controlled floods released from Glen Canyon Dam. Long-term monitoring is accomplished at a range of scales, by a combination of annual topographic survey at selected sites, daily collection of images from those sites using novel, autonomously operating, digital camera systems (hereafter referred to as 'remote cameras'), and quadrennial remote sensing of sandbars canyonwide. In this paper, we present results from the remote camera images for daily changes in sandbar topography.

  5. A method for measuring aircraft height and velocity using dual television cameras

    NASA Technical Reports Server (NTRS)

    Young, W. R.

    1977-01-01

    A unique electronic optical technique, consisting of two closed circuit television cameras and timing electronics, was devised to measure an aircraft's horizontal velocity and height above ground without the need for airborne cooperative devices. The system is intended to be used where the aircraft has a predictable flight path and a height of less than 660 meters (2,000 feet) at or near the end of an air terminal runway, but is suitable for greater aircraft altitudes whenever the aircraft remains visible. Two television cameras, pointed at zenith, are placed in line with the expected path of travel of the aircraft. Velocity is determined by measuring the time it takes the aircraft to travel the measured distance between cameras. Height is determined by correlating this speed with the time required to cross the field of view of either camera. Preliminary tests with a breadboard version of the system and a small model aircraft indicate the technique is feasible.
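    The geometry described above reduces to two simple relations: velocity from the known baseline between the cameras, and height from the time to cross one camera's angular field of view (whose ground footprint at height h is 2*h*tan(FOV/2)). The numbers below are illustrative assumptions, not values from the report:

```python
import math

def velocity_from_baseline(baseline_m, transit_time_s):
    """Speed from the time to travel the known distance between cameras."""
    return baseline_m / transit_time_s

def height_from_fov(velocity_ms, crossing_time_s, fov_rad):
    """Height from the time to cross one zenith camera's field of view."""
    # Footprint width at height h is 2*h*tan(fov/2); aircraft covers it
    # in crossing_time_s at the measured velocity.
    return velocity_ms * crossing_time_s / (2 * math.tan(fov_rad / 2))

v = velocity_from_baseline(300.0, 4.0)           # 300 m baseline, 4 s transit
h = height_from_fov(v, 2.0, math.radians(30.0))  # 2 s to cross a 30-deg FOV
print(v, round(h, 1))
```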

  6. Calibration and Validation of Airborne InSAR Geometric Model

    NASA Astrophysics Data System (ADS)

    Chunming, Han; huadong, Guo; Xijuan, Yue; Changyong, Dou; Mingming, Song; Yanbing, Zhang

    2014-03-01

    Image registration, or geo-coding, is a very important step for many applications of airborne interferometric Synthetic Aperture Radar (InSAR), especially those involving Digital Surface Model (DSM) generation, which requires accurate knowledge of the geometry of the InSAR system, while the trajectory and attitude instabilities of the aircraft introduce severe distortions into the three-dimensional (3-D) geometric model. The 3-D geometric model of an airborne SAR image depends on the SAR processor itself. Working in squinted mode, i.e., with an offset angle (squint angle) of the radar beam from the broadside direction, aircraft motion instabilities may produce distortions in the airborne InSAR geometric relationship which, if not properly compensated for during SAR imaging, may damage the image registration. The determination of locations in the SAR image depends on the irradiated topography and on exact knowledge of all signal delays: the range delay and chirp delay (adjusted by the radar operator) and internal delays which are unknown a priori. Hence, in order to obtain reliable results, these parameters must be properly calibrated. An airborne InSAR mapping system has been developed by the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS) to acquire three-dimensional geo-spatial data with high resolution and accuracy. To test the performance of the InSAR system, a Validation/Calibration (Val/Cal) campaign was carried out in Sichuan province, south-west China, whose results will be reported in this paper.

  7. Use of airborne near-infrared LiDAR for determining channel cross-section characteristics and monitoring aquatic habitat in Pacific Northwest rivers: A preliminary analysis [Chapter 6

    Treesearch

    Russell N. Faux; John M. Buffington; M. German Whitley; Steve H. Lanigan; Brett B. Roper

    2009-01-01

    Aquatic habitat monitoring is being conducted by numerous organizations in many parts of the Pacific Northwest to document physical and biological conditions of stream reaches as part of legal- and policy-mandated environmental assessments. Remote sensing using discrete-return, near-infrared, airborne LiDAR (Light Detection and Ranging) and high-resolution digital...

  8. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively in the domain of image forensics, together with a presentation and classification of ongoing developments in the area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods used to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based, to present a classification. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
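    The "sensor camera fingerprints based" class above can be illustrated with a PRNU-style sketch: each sensor imprints a weak multiplicative noise pattern, so averaging noise residuals of known images estimates a camera fingerprint, and a query image is attributed to the camera whose fingerprint correlates best with its residual. Everything here is a simplified assumption: synthetic flat-field images and a box-blur residual stand in for the wavelet denoising used in practice.

```python
import numpy as np

def residual(img):
    """Noise residual: image minus a simple 3x3 box-blurred version."""
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(img, 1, mode="edge")
    smooth = sum(pad[i:i + img.shape[0], j:j + img.shape[1]] * k[i, j]
                 for i in range(3) for j in range(3))
    return img - smooth

def corr(a, b):
    """Normalized cross-correlation of two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
shape = (64, 64)
prnu_a, prnu_b = rng.normal(0, 0.02, shape), rng.normal(0, 0.02, shape)

def shoot(prnu):
    """Flat-field exposure: scene * (1 + PRNU) + additive shot noise."""
    scene = rng.uniform(0.3, 0.7)      # uniform scene brightness per image
    return scene * (1 + prnu) + rng.normal(0, 0.01, shape)

# Estimate each camera's fingerprint from 20 known flat-field images
fp_a = np.mean([residual(shoot(prnu_a)) for _ in range(20)], axis=0)
fp_b = np.mean([residual(shoot(prnu_b)) for _ in range(20)], axis=0)

query = residual(shoot(prnu_a))        # query image taken by camera A
print(corr(query, fp_a) > corr(query, fp_b))
```

    Real systems threshold the correlation (or a peak-to-correlation-energy score) rather than simply taking the larger value, but the attribution principle is the same.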

  9. Gyrocopter-Based Remote Sensing Platform

    NASA Astrophysics Data System (ADS)

    Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.

    2015-04-01

    In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR-camera is used for aerial recordings in the thermal infrared range. Furthermore another custom-developed highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched to mosaics.

  10. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

    The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in the digitization of greenhouse plants include how to acquire the three-dimensional shape data of greenhouse plants and how to carry out realistic stereo reconstruction. Concerning these issues, this paper proposes an effective method for the digitization of greenhouse plants using a binocular stereo vision system. Stereo vision is a technique aiming at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search for stereo correspondence and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
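    The last step of the pipeline named in the abstract (calibration, rectification, correspondence, triangulation) can be sketched as linear (DLT) triangulation of one matched point from two camera projection matrices. The intrinsics, baseline and 3D point below are synthetic assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from pixel matches x1, x2 via SVD of the DLT system."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector, homogeneous point
    return X[:3] / X[3]

K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])      # shared intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])  # 10 cm baseline

X_true = np.array([0.2, -0.1, 2.0])            # e.g. a point on a leaf, 2 m away
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))
```

    Repeating this for every correspondence found in the matching step yields the point cloud the authors segment into stems and leaves.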

  11. Agreement and reading time for differently-priced devices for the digital capture of X-ray films.

    PubMed

    Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés

    2012-03-01

    We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading-time between devices: the mean reading-time for the film digitizer was 93 s, it was 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
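    Inter-observer and pair-device agreement on categorical readings of this kind is commonly summarized with Cohen's kappa, sketched below. The two readers' labels are synthetic examples, not the study's data:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical labels."""
    cats = sorted(set(r1) | set(r2))
    idx = {c: i for i, c in enumerate(cats)}
    m = np.zeros((len(cats), len(cats)))
    for a, b in zip(r1, r2):
        m[idx[a], idx[b]] += 1
    n = m.sum()
    po = np.trace(m) / n                       # observed agreement
    pe = (m.sum(0) * m.sum(1)).sum() / n**2    # agreement expected by chance
    return (po - pe) / (1 - pe)

r1 = ["normal", "nodule", "nodule", "normal", "pneumo", "normal"]
r2 = ["normal", "nodule", "normal", "normal", "pneumo", "nodule"]
print(round(cohens_kappa(r1, r2), 2))
```

    Values around 0.4-0.6 are conventionally read as "moderate" agreement, which is the band the abstract reports for the pair-device comparisons.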

  12. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advancements in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlets and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry, along with digital image processing techniques, can provide the theoretical and practical fundamentals for data processing. Time-lapse images over periods in west Greenland capture various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, foxes chewing instrument cables, and ravens pecking the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images need to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.

  13. Focusing and depth of field in photography: application in dermatology practice.

    PubMed

    Taheri, Arash; Yentzer, Brad A; Feldman, Steven R

    2013-11-01

    Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
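    The aperture/depth-of-field relation the article discusses has a standard thin-lens formulation via the hyperfocal distance. The sketch below uses the textbook formulas with illustrative assumptions (a 60 mm macro lens, 0.03 mm circle of confusion, 0.5 m subject distance) typical of clinical close-up work:

```python
def depth_of_field(f_mm, N, c_mm, u_mm):
    """Return (near, far, total) limits of acceptable sharpness in mm."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                # hyperfocal distance
    near = u_mm * (H - f_mm) / (H + u_mm - 2 * f_mm)
    far = u_mm * (H - f_mm) / (H - u_mm) if u_mm < H else float("inf")
    return near, far, far - near

f, c, u = 60.0, 0.03, 500.0
for N in (4, 8, 16):                                 # stopping down widens DoF
    near, far, total = depth_of_field(f, N, c, u)
    print(f"f/{N}: {total:.1f} mm in focus")
```

    Doubling the f-number roughly doubles the in-focus zone at this distance, at the cost of needing four times the light, which is the trade-off the article describes.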

  14. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    The development of a digital indoor archery simulator based on embedded systems is a solution to the limited availability of adequate fields or open spaces, especially in big cities. Developing the device requires simulations that calculate the score achieved on the target, based on a parabolic-motion model parameterized by the arrow's initial velocity and direction of motion as it reaches the target. The simulator device is therefore complemented with an initial-velocity measuring device using ultrasonic sensors and a direction-measuring device using a digital camera. The methodology follows a research-and-development approach to application software built on modeling and simulation. The research objective is to create a simulation application that calculates the score of arrows on the target, as a preliminary stage in the development of the archery simulator device. Implementing the score calculation in an application program yields an archery game simulation that can serve as a reference for developing a digital indoor archery simulator with embedded systems using ultrasonic sensors and web cameras. The application's simulation calculations were compared against the outer radius of the target circle as imaged by a camera from a distance of three meters.
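    The parabolic-motion scoring idea can be sketched directly: given the initial speed (from the ultrasonic sensor) and launch angle, projectile equations give the arrow's vertical offset at the target plane, and the radial offset from the bull's-eye maps to a score ring. All parameters below (18 m indoor distance, 2 cm ring width, launch values) are illustrative assumptions:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_at_target(v0, angle_deg, distance_m):
    """Vertical offset (m) of the arrow when it reaches the target plane."""
    a = math.radians(angle_deg)
    t = distance_m / (v0 * math.cos(a))        # time to cover the range
    return v0 * math.sin(a) * t - 0.5 * G * t ** 2

def score(radial_offset_m, ring_width_m=0.02):
    """10 points at the centre, one less per ring, 0 beyond the last ring."""
    ring = int(radial_offset_m / ring_width_m)
    return max(0, 10 - ring)

v0, angle, dist = 50.0, 2.0, 18.0              # 50 m/s, 2-deg launch, 18 m range
offset = abs(drop_at_target(v0, angle, dist))  # distance from the aim point
print(round(offset, 3), score(offset))
```

    At 50 m/s a 2-degree launch angle almost exactly cancels gravity drop over 18 m, so the simulated arrow lands within the innermost ring; flatter or steeper launches lower the score accordingly.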

  15. MS Walheim poses with a Hasselblad camera on the flight deck of Atlantis during STS-110

    NASA Image and Video Library

    2002-04-08

    STS110-E-5017 (8 April 2002) --- Astronaut Rex J. Walheim, STS-110 mission specialist, holds a camera on the aft flight deck of the Space Shuttle Atlantis. A blue and white Earth is visible through the overhead windows of the orbiter. The image was taken with a digital still camera.

  16. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  17. Detecting trends in regional ecosystem functioning: the importance of field data for calibrating and validating NEON airborne remote sensing instruments and science data products

    NASA Astrophysics Data System (ADS)

    McCorkel, J.; Kuester, M. A.; Johnson, B. R.; Krause, K.; Kampe, T. U.; Moore, D. J.

    2011-12-01

    The National Ecological Observatory Network (NEON) is a research facility under development by the National Science Foundation to improve our understanding of and ability to forecast the impacts of climate change, land-use change, and invasive species on ecology. The infrastructure, designed to operate over 30 years or more, includes site-based flux tower and field measurements, coordinated with airborne remote sensing observations to observe key ecological processes over a broad range of temporal and spatial scales. NEON airborne data on vegetation biochemical, biophysical, and structural properties and on land use and land cover will be captured at 1 to 2 meter resolution by an imaging spectrometer, a small-footprint waveform-LiDAR and a high-resolution digital camera. Annual coverage of the 60 NEON sites and capacity to support directed research flights or respond to unexpected events will require three airborne observation platforms (AOP). The integration of field and airborne data with satellite observations and other national geospatial data for analysis, monitoring and input to ecosystem models will extend NEON observations to regions across the United States not directly sampled by the observatory. The different spatial scales and measurement methods make quantitative comparisons between remote sensing and field data, typically collected over small sample plots (e.g. < 0.2 ha), difficult. New approaches to developing temporal and spatial scaling relationships between these data are necessary to enable validation of airborne and satellite remote sensing data and for incorporation of these data into continental or global scale ecological models. In addition to consideration of the methods used to collect ground-based measurements, careful calibration of the remote sensing instrumentation and an assessment of the accuracy of algorithms used to derive higher-level science data products are needed. 
Furthermore, long-term consistency of the data collected by all three airborne instrument packages over the NEON sites requires traceability of the calibration to national standards, field-based verification of instrument calibration and stability in the aircraft environment, and an independent assessment of the quality of derived data products. This work describes the development of the calibration laboratory, early evaluation of field-based vicarious calibration, development of scaling relationships, and test flights. Complementary laboratory- and field-based calibration of the AOP in addition to consistency with on-board calibration methods provide confidence that low-level data such as radiance and surface reflectance measurements are accurate and comparable among different sensors. Algorithms that calculate higher-level data products including essential climate variables will be validated against equivalent ground- and satellite-based results. Such a validated data set across multiple spatial and temporal scales is key to enabling ecosystem models to forecast the effects of climate change, land-use change and invasive species on the continental scale.

  18. Digital X-ray camera for quality evaluation three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  19. Explosive Transient Camera (ETC) Program

    DTIC Science & Technology

    1991-10-01

    (Block diagram residue omitted: CCD clocking unit, "upstairs" electronics, and analog-to-digital processor.) The camera transmits digital video and status information to the "downstairs" system; the clocking unit and regulator/driver board are the only CCD-dependent components.

  20. Environmental applications utilizing digital aerial imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monday, H.M.

    1995-06-01

    This paper discusses the use of satellite imagery, aerial photography, and computerized airborne imagery as applied to environmental mapping, analysis, and monitoring. A project conducted by the City of Irving, Texas involves compliance with National Pollutant Discharge Elimination System (NPDES) requirements stipulated by the Environmental Protection Agency. The purpose of the project was the development and maintenance of a stormwater drainage utility. Digital imagery was collected for a portion of the city to map the City's porous and impervious surfaces, which will then be overlaid with property boundaries in the City's existing Geographic Information System (GIS). This information will allow the City to determine an equitable tax for each land parcel according to the amount of water each parcel contributes to the stormwater system. Another project involves environmental compliance for warm-water discharges created by utility companies. Environmental consultants are using digital airborne imagery to analyze thermal plume effects as well as to monitor power generation facilities. A third project involves wetland restoration. Due to freeway and other forms of construction, plus a major reduction of fresh water supplies, the Southern California coastal wetlands are being seriously threatened. These wetlands, rich spawning grounds for plant and animal life, are home to thousands of waterfowl and shore birds that use this habitat for nesting and feeding grounds. Under the leadership of Southern California Edison (SCE) and CALTRANS (California Department of Transportation), several wetland areas such as the San Dieguito Lagoon (Del Mar, California), the Sweetwater Marsh (San Diego, California), and the Tijuana Estuary (San Diego, California) are being restored and closely monitored using digital airborne imagery.

  1. Evaluation of Digital Technology and Software Use among Business Education Teachers

    ERIC Educational Resources Information Center

    Ellis, Richard S.; Okpala, Comfort O.

    2004-01-01

    Digital video cameras are part of the evolution of multimedia digital products that have positive applications for educators, students, and industry. Multimedia digital video can be utilized by any personal computer and it allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics,…

  2. Digital Photography and Journals in a Kindergarten-First-Grade Classroom: Toward Meaningful Technology Integration in Early Childhood Education

    ERIC Educational Resources Information Center

    Ching, Cynthia Carter; Wang, X. Christine; Shih, Mei-Li; Kedem, Yore

    2006-01-01

    To explore meaningful and effective technology integration in early childhood education, we investigated how kindergarten-first-grade students created and employed digital photography journals to support social and cognitive reflection. These students used a digital camera to document their daily school activities and created digital photo…

  3. Development and Evaluation of Math Library Routines for a 1750A Airborne Microcomputer.

    DTIC Science & Technology

    1985-12-04

    Since each iteration doubles the number of correct significant digits in the square root, this assures an accuracy of 63.32 bits. In the logarithm routine, C1 + C2 represents ln(C) to more than working precision; this method gives extra digits of precision equivalent to the number of extra digits in the representation. The routine will not underflow for |x| < eps; Cody and Waite have suggested that eps = 2^(-t/2), where there are t base-2 digits in the significand.
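    The square-root remark refers to Newton (Heron) iteration, whose quadratic convergence is what doubles the number of correct digits per step; a minimal sketch:

```python
def newton_sqrt(x, iterations):
    """Approximate sqrt(x) by Newton's method: y <- (y + x/y) / 2.

    Each step roughly doubles the number of correct significant
    digits, so a handful of iterations from a coarse seed reaches
    full working precision.
    """
    y = x / 2.0 if x > 1.0 else 1.0  # crude initial guess
    for _ in range(iterations):
        y = 0.5 * (y + x / y)
    return y
```

    Production math libraries instead pick a seed accurate to a few bits (e.g. from the exponent field), so a fixed small number of iterations suffices for the full 63.32-bit accuracy cited above.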

  4. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  5. Determining fast orientation changes of multi-spectral line cameras from the primary images

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2012-01-01

    Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities, only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras, which carry a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help, a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of conditions and achieved precise and reliable results.

  6. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    NASA Astrophysics Data System (ADS)

    Narayanan, V. L.

    2017-12-01

    For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms has been observed from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during summer months with a digital camera capable of recording high-speed video at up to 480 fps. At 480 fps, each video file is recorded for 30 s, resulting in 14,400 deinterlaced images per file. An automatic processing algorithm was developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, and luminous channels corresponding to continuing currents and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud and cloud-air discharges will also be displayed. This indicates that although high-speed cameras running at a few thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-based digital cameras can provide important information as well. The lightning imaging activity presented herein began as an amateur effort, and plans are currently underway to propose a suite of supporting instruments for coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
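    A frame-flagging pass of the kind the abstract mentions can be approximated by thresholding each frame's mean brightness against the sequence's background level. This is a hedged sketch, not the authors' algorithm; the function name and the median-plus-k-sigma trigger rule are assumptions.

```python
import numpy as np

def detect_lightning_frames(frames, k=5.0):
    """Return indices of frames whose mean brightness spikes above
    the background level, as a crude lightning-event detector.

    frames: sequence of 2D grayscale arrays
    k: trigger threshold in standard deviations above the median
    """
    means = np.array([f.mean() for f in frames])
    baseline = np.median(means)          # background brightness
    spread = means.std() + 1e-9          # avoid a zero threshold
    return np.flatnonzero(means > baseline + k * spread)
```

    A real pipeline would add per-pixel differencing to localize the discharge channel within each flagged frame, but whole-frame statistics are enough to cut 14,400 images per file down to the handful containing events.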

  7. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs must not only be unaltered and authentic, capture context-relevant content, and meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person; as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from the simplicity and speed of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be applied to the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that a color management tool be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).

  8. Volumetric evolution of Surtsey, Iceland, from topographic maps and scanning airborne laser altimetry

    USGS Publications Warehouse

    Garvin, J.B.; Williams, R.S.; Frawley, J.J.; Krabill, W.B.

    2000-01-01

    The volumetric evolution of Surtsey has been estimated on the basis of digital elevation models derived from NASA scanning airborne laser altimeter surveys (20 July 1998), as well as digitized 1:5,000-scale topographic maps produced by the National Land Survey of Iceland and by Norrman. Subaerial volumes have been computed from co-registered digital elevation models (DEM's) from 6 July 1968, 11 July 1975, 16 July 1993, and 20 July 1998 (scanning airborne laser altimetry), as well as true surface area (above mean sea level). Our analysis suggests that the subaerial volume of Surtsey has been reduced from nearly 0.100 km3 on 6 July 1968 to 0.075 km3 on 20 July 1998. Linear regression analysis of the temporal evolution of Surtsey's subaerial volume indicates that most of its subaerial surface will be at or below mean sea-level by approximately 2100. This assumes a conservative estimate of continuation of the current pace of marine erosion and mass-wasting on the island, including the indurated core of the conduits of the Surtur I and Surtur II eruptive vents. If the conduits are relatively resistant to marine erosion they will become sea stacks after the rest of the island has become a submarine shoal, and some portions of the island could survive for centuries. The 20 July 1998 scanning laser altimeter surveys further indicate rapid enlargement of erosional canyons in the northeastern portion of the partial tephra ring associated with Surtur I. Continued airborne and eventually spaceborne topographic surveys of Surtsey are planned to refine the inter-annual change of its subaerial volume.
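    The linear-regression step can be illustrated with the two subaerial volumes quoted in the abstract. This is a sketch of the method only: the study fit four DEM epochs, and volume reaching zero is not the same as the surface dropping to mean sea level.

```python
import numpy as np

# Subaerial volumes quoted in the abstract (km^3).
years = np.array([1968.0, 1998.0])
volumes = np.array([0.100, 0.075])

# First-degree (linear) fit, then extrapolate to zero volume.
slope, intercept = np.polyfit(years, volumes, 1)
year_at_zero = -intercept / slope  # about 2088 for these two points alone
```

    With all four DEMs, and with the erosion-resistant vent conduits excluded, the authors' estimate shifts to approximately 2100.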

  9. Qualification Tests of Micro-camera Modules for Space Applications

    NASA Astrophysics Data System (ADS)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  10. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.

  11. Preliminary assessment of airborne imaging spectrometer and airborne thematic mapper data acquired for forest decline areas in the Federal Republic of Germany

    NASA Technical Reports Server (NTRS)

    Herrmann, Karin; Ammer, Ulrich; Rock, Barrett; Paley, Helen N.

    1988-01-01

    This study evaluated the utility of data collected by the high-spectral resolution airborne imaging spectrometer (AIS-2, tree mode, spectral range 0.8-2.2 microns) and the broad-band Daedalus airborne thematic mapper (ATM, spectral range 0.42-13.0 micron) in assessing forest decline damage at a predominantly Scotch pine forest in the FRG. Analysis of spectral radiance values from the ATM and raw digital number values from AIS-2 showed that higher reflectance in the near infrared was characteristic of high damage (heavy chlorosis, limited needle loss) in Scotch pine canopies. A classification image of a portion of the AIS-2 flight line agreed very well with a damage assessment map produced by standard aerial photointerpretation techniques.

  12. Airborne Camera System for Real-Time Applications - Support of a National Civil Protection Exercise

    NASA Astrophysics Data System (ADS)

    Gstaiger, V.; Romer, H.; Rosenbaum, D.; Henkel, F.

    2015-04-01

    In the VABENE++ project of the German Aerospace Center (DLR), powerful tools are being developed to aid public authorities and organizations with security responsibilities, as well as traffic authorities, when dealing with disasters and large public events. One focus lies on the acquisition of high-resolution aerial imagery and its fully automatic processing, analysis and near real-time provision to decision makers in emergency situations. For this purpose a camera system was developed to be operated from a helicopter, with light-weight processing units and a microwave link for fast data transfer. In order to meet end-users' requirements, DLR works closely with the German Federal Office of Civil Protection and Disaster Assistance (BBK) within this project. One task of BBK is to establish, maintain and train the German Medical Task Force (MTF), which is deployed nationwide in case of large-scale disasters. In October 2014, several units of the MTF were deployed for the first time in the framework of a national civil protection exercise in Brandenburg. The VABENE++ team joined the exercise and provided near real-time aerial imagery, videos and derived traffic information to support the direction of the MTF and to identify needs for further improvements and developments. In this contribution the authors introduce the new airborne camera system together with its near real-time processing components and share experiences gained during the national civil protection exercise.

  13. Confocal retinal imaging using a digital light projector with a near infrared VCSEL source

    NASA Astrophysics Data System (ADS)

    Muller, Matthew S.; Elsner, Ann E.

    2018-02-01

    A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.

  14. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    NASA Astrophysics Data System (ADS)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  15. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  16. Using digital time-lapse cameras to monitor species-specific understorey and overstorey phenology in support of wildlife habitat assessment.

    PubMed

    Bater, Christopher W; Coops, Nicholas C; Wulder, Michael A; Hilker, Thomas; Nielsen, Scott E; McDermid, Greg; Stenhouse, Gordon B

    2011-09-01

    Critical to habitat management is understanding not only the location of animal food resources, but also the timing of their availability. Grizzly bear (Ursus arctos) diets, for example, shift seasonally as different vegetation species enter key phenological phases. In this paper, we describe the use of a network of seven ground-based digital camera systems to monitor understorey and overstorey vegetation within species-specific regions of interest. Established across an elevation gradient in western Alberta, Canada, the cameras collected true-colour (RGB) images daily from 13 April 2009 to 27 October 2009. Fourth-order polynomials were fit to an RGB-derived index, which was then compared to field-based observations of phenological phases. Using linear regression to statistically relate the camera and field data, results indicated that 61% (r² = 0.61, df = 1, F = 14.3, p = 0.0043) of the variance observed in the field phenological phase data is captured by the cameras for the start of the growing season and 72% (r² = 0.72, df = 1, F = 23.09, p = 0.0009) of the variance in length of growing season. Based on the linear regression models, the mean absolute differences in residuals between predicted and observed start of growing season and length of growing season were 4 and 6 days, respectively. This work extends upon previous research by demonstrating that specific understorey and overstorey species can be targeted for phenological monitoring in a forested environment, using readily available digital camera technology and RGB-based vegetation indices.
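    The curve-fitting step (a fourth-order polynomial fit to a daily RGB-derived index) can be sketched with NumPy. The index and the names here are illustrative assumptions; the abstract does not reproduce the study's own index definition.

```python
import numpy as np

def greenness_curve(day_of_year, green_index):
    """Fit a fourth-order polynomial to a daily greenness index
    (e.g. a green chromatic coordinate G/(R+G+B) averaged over a
    region of interest) and return it as a callable curve."""
    coeffs = np.polyfit(day_of_year, green_index, 4)
    return np.poly1d(coeffs)
```

    Phenological dates such as the start of the growing season can then be read off the fitted curve, for example as the day it first crosses a chosen fraction of its seasonal amplitude.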

  17. Acquisition of airborne imagery in support of Deepwater Horizon oil spill recovery assessments

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Muller-Karger, Frank E.

    2012-09-01

    Remote sensing imagery was collected from a low flying aircraft along the near coastal waters of the Florida Panhandle and northern Gulf of Mexico and into Barataria Bay, Louisiana, USA, during March 2011. Imagery was acquired from an aircraft that simultaneously collected traditional photogrammetric film imagery, digital video, digital still images, and digital hyperspectral imagery. The original purpose of the project was to collect airborne imagery to support assessment of weathered oil in littoral areas influenced by the Deepwater Horizon oil and gas spill that occurred during the spring and summer of 2010. This paper describes the data acquired and presents information that demonstrates the utility of small spatial scale imagery to detect the presence of weathered oil along littoral areas in the northern Gulf of Mexico. Flight tracks and examples of imagery collected are presented and methods used to plan and acquire the imagery are described. Results suggest weathered oil in littoral areas after the spill was contained at the source.

  18. Digital outcrop model of stratigraphy and breccias of the southern Franklin Mountains, El Paso, Texas

    USGS Publications Warehouse

    Bellian, Jerome A.; Kerans, Charles; Repetski, John E.; Derby, James R.; Fritz, R.D.; Longacre, S.A.; Morgan, W.A.; Sternbach, C.A.

    2012-01-01

    The breccias of the SFM were previously described as the result of collapsed paleocaves that formed during subaerial exposure related to the Sauk-Tippecanoe unconformity. A new approach in this work uses traditional field mapping combined with high-resolution (1-m [3.3-ft] point spacing) airborne light detection and ranging (LIDAR) data over 24 km2 (9 mi2) to map breccia and relevant stratal surfaces. Airborne LIDAR data were used to create a digital outcrop model of the SFM from which a detailed (1:2000 scale) geologic map was created. The geologic map includes formation, fault, and breccia contacts. The digital outcrop model was used to interpret three-dimensional spatial relationships of breccia bodies with respect to the current understanding of the tectonic and stratigraphic evolution of the SFM. The data presented here are used to discuss potential stratigraphic, temporal, and tectonic controls on the formation of caves within the study area that eventually collapsed to form the breccias currently exposed in outcrop.

  19. Evolution of the SOFIA tracking control system

    NASA Astrophysics Data System (ADS)

    Fiebig, Norbert; Jakob, Holger; Pfüller, Enrico; Röser, Hans-Peter; Wiedemann, Manuel; Wolf, Jürgen

    2014-07-01

    The airborne observatory SOFIA (Stratospheric Observatory for Infrared Astronomy) is undergoing a modernization of its tracking system. This includes new, highly sensitive tracking cameras, control computers, filter wheels and other equipment, as well as a major redesign of the control software. The experiences along the migration path from an aged 19" VMEbus-based control system to modern industrial PCs, from the VxWorks real-time operating system to embedded Linux, and to a state-of-the-art software architecture are presented. Further, a concept is presented for operating the new camera as a scientific instrument as well, in parallel to tracking.

  20. s48-e-007

    NASA Image and Video Library

    2013-01-15

    S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE

  1. Off-axis digital holographic camera for quantitative phase microscopy.

    PubMed

    Monemhaghdoust, Zahra; Montfort, Frédéric; Emery, Yves; Depeursinge, Christian; Moser, Christophe

    2014-06-01

    We propose and experimentally demonstrate a digital holographic camera which can be attached to the camera port of a conventional microscope for obtaining digital holograms in a self-reference configuration, under short coherence illumination and in a single shot. A thick holographic grating filters the beam containing the sample information in two dimensions through diffraction. The filtered beam creates the reference arm of the interferometer. The spatial filtering method, based on the high angular selectivity of the thick grating, reduces the alignment sensitivity to angular displacements compared with pinhole-based Fourier filtering. The addition of a thin holographic grating alters the coherence plane tilt introduced by the thick grating so as to create high-visibility interference over the entire field of view. The acquired full-field off-axis holograms are processed to retrieve the amplitude and phase information of the sample. The system produces phase images of cheek cells qualitatively similar to phase images extracted with a standard commercial DHM.
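
Off-axis holograms are conventionally processed with the Fourier sideband method: isolate the +1 diffraction order in the spectrum, re-center it, and transform back. A minimal sketch of that textbook recipe (not necessarily the authors' exact pipeline), assuming the carrier offset of the +1 order and a square filter half-width are known:

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier):
    """Retrieve the complex field from an off-axis hologram.

    hologram: H x W real interferogram; carrier: (fy, fx) pixel offset of
    the +1 diffraction order in the centered FFT, assumed known from the
    off-axis angle. The filter half-width (H // 8) is an arbitrary choice
    for illustration.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    H, W = hologram.shape
    cy, cx = H // 2 + carrier[0], W // 2 + carrier[1]
    win = H // 8
    # Crop the +1 order and re-center it to remove the carrier frequency
    side = np.zeros_like(F)
    side[H//2-win:H//2+win, W//2-win:W//2+win] = F[cy-win:cy+win, cx-win:cx+win]
    field = np.fft.ifft2(np.fft.ifftshift(side))
    return np.abs(field), np.angle(field)
```

For a synthetic fringe pattern `1 + cos(2*pi*f*x + phi)` with an integer carrier frequency, the recovered phase is constant and equal to `phi`.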

  2. A smartphone photogrammetry method for digitizing prosthetic socket interiors.

    PubMed

    Hernandez, Amaia; Lemaire, Edward

    2017-04-01

    Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners have difficulty scanning socket interiors. While dedicated scanners exist, they are expensive and the cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Fifteen two-dimensional images of the socket's interior are captured using a smartphone camera. A 3D model is generated using cloud-based software. Linear measurements were compared between sockets and the related 3D models. 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which was less accurate than models obtained by high quality 3D scanners. However, this method would provide a viable 3D digital socket reproduction that is accessible and low-cost, after processing in prosthetic CAD software. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.

  3. Rapid orthophoto development system.

    DOT National Transportation Integrated Search

    2013-06-01

    The DMC system procured in the project represented state-of-the-art, large-format digital aerial camera systems at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...

  4. Current instrument status of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

    NASA Technical Reports Server (NTRS)

    Eastwood, Michael L.; Sarture, Charles M.; Chrien, Thomas G.; Green, Robert O.; Porter, Wallace M.

    1991-01-01

    An upgraded version of AVIRIS, an airborne imaging spectrometer based on a whiskbroom-type scanner coupled via optical fibers to four dispersive spectrometers, that has been in operation since 1987 is described. Emphasis is placed on specific AVIRIS subsystems including foreoptics, fiber optics, and an in-flight reference source; spectrometers and detector dewars; a scan drive mechanism; a signal chain; digital electronics; a tape recorder; calibration systems; and ground support requirements.

  5. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    A full-frame, high-speed 3D shape and deformation measurement technique using stereo-digital image correlation (stereo-DIC) and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
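
The channel-separation step can be sketched as a per-pixel linear unmixing of the red and blue channels. The 2×2 crosstalk matrix below is a hypothetical example value; the paper's correction method may differ in detail:

```python
import numpy as np

def separate_channels(color_img, crosstalk=((1.0, 0.08), (0.06, 1.0))):
    """Recover the two optical-path images from one color frame.

    color_img: H x W x 3 array (R, G, B); the red channel carries one
    view and the blue channel the other, with mutual leakage described
    by the assumed 2x2 mixing matrix [[r<-r, r<-b], [b<-r, b<-b]].
    """
    observed = np.stack([color_img[..., 0], color_img[..., 2]], axis=-1)
    # Invert the mixing per pixel: true = M^-1 @ observed
    m_inv = np.linalg.inv(np.asarray(crosstalk))
    true = observed @ m_inv.T
    return true[..., 0], true[..., 1]
```

Mixing two synthetic sub-images with the same matrix and then calling `separate_channels` recovers them exactly, which is the sanity check one would run before applying the correction to real frames.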

  6. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

    A real-time display of tissue ischemia which comprises three CCD video cameras, each with a narrow bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
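
The three-channel combination can be sketched as follows. The per-channel gains stand in for the signal-intensity adjustment described above and are assumed inputs, not values from the patent:

```python
import numpy as np

def compose_rgb(frames, gains):
    """Combine three narrowband camera frames into one RGB display image.

    frames: three H x W arrays, one per wavelength band (assumed already
    co-registered by the beamsplitter arrangement); gains: per-channel
    scale factors for the intensity adjustment. Output is clipped to the
    displayable [0, 1] range.
    """
    rgb = np.stack([g * f for g, f in zip(gains, frames)], axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```

Any per-channel enhancement algorithm would slot in between the gain step and the final stacking.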

  7. Optical frequency comb profilometry using a single-pixel camera composed of digital micromirror devices.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2015-01-01

    We demonstrate an optical frequency comb profilometer with a single-pixel camera that measures the position and profile of an object's surface over a range extending far beyond the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform the profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. Therefore, the axial dynamic range was increased to 0.87×10^5. It was found from the experiments and computer simulations that the improvement was derived from the higher modulation contrast of digital micromirror devices. The frame rate was also increased to 20 Hz.
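
A rough consistency check on the reported figure, assuming the axial dynamic range is the unambiguous range (one modulation wavelength) divided by the axial resolution:

```python
# Assumed definition: dynamic range = unambiguous range / axial resolution
wavelength_m = 0.30    # 1 GHz modulation -> 30 cm wavelength
axial_res_m = 3.4e-6   # reported axial resolution
dynamic_range = wavelength_m / axial_res_m
print(f"{dynamic_range:.2e}")  # within ~2% of the reported 0.87e5
```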

  8. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
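
In its simplest form, scene-specific color calibration reduces to fitting a correction matrix from measured patch values to known reference values. A minimal sketch under that assumption (the authors' full pipeline involves raw linearization and more steps); all names here are illustrative:

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 3x3 matrix mapping linear camera RGB to reference RGB.

    measured, reference: N x 3 arrays of color-patch values (camera raw,
    already linearized; and the known target values). Returns M such that
    reference ~= measured @ M.
    """
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def apply_color_matrix(img, M):
    """Apply the fitted matrix to an image of shape (..., 3)."""
    return img @ M
```

Adding the scene's own dominant colors to the patch set is what makes the calibration "scene-specific": the fit is then weighted toward the part of the gamut the scene actually occupies.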

  9. A neutron camera system for MAST.

    PubMed

    Cecconello, M; Turnyanskiy, M; Conroy, S; Ericsson, G; Ronchi, E; Sangaroon, S; Akers, R; Fitzgerald, I; Cullen, A; Weiszflog, M

    2010-10-01

    A prototype neutron camera has been developed and installed at MAST as part of a feasibility study for a multichord neutron camera system with the aim to measure the spatial and time resolved 2.45 MeV neutron emissivity profile. Liquid scintillators coupled to a fast digitizer are used for neutron/gamma ray digital pulse shape discrimination. The preliminary results obtained clearly show the capability of this diagnostic to measure neutron emissivity profiles with sufficient time resolution to study the effect of fast ion loss and redistribution due to magnetohydrodynamic activity. A minimum time resolution of 2 ms has been achieved with a modest 1.5 MW of neutral beam injection heating, with a measured neutron count rate of a few hundred kHz.
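
Digital pulse shape discrimination with liquid scintillators is commonly implemented as a charge-comparison (tail-to-total) ratio; neutron pulses carry relatively more charge in the slow tail than gamma pulses. A sketch of that standard approach, not necessarily the MAST implementation, with the gate lengths as assumed parameters:

```python
import numpy as np

def psd_ratio(pulse, peak_idx, short_gate, long_gate):
    """Charge-comparison pulse shape discrimination figure of merit.

    pulse: digitized scintillator waveform (baseline-subtracted, positive);
    short_gate, long_gate: sample counts after the peak delimiting the
    tail window and the total window. Larger ratios indicate neutrons.
    """
    total = pulse[peak_idx:peak_idx + long_gate].sum()
    tail = pulse[peak_idx + short_gate:peak_idx + long_gate].sum()
    return tail / total
```

On synthetic pulses, adding a slow decay component (mimicking a neutron event) increases the ratio relative to a purely fast pulse.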

  10. Land cover/use classification of Cairns, Queensland, Australia: A remote sensing study involving the conjunctive use of the airborne imaging spectrometer, the large format camera and the thematic mapper simulator

    NASA Technical Reports Server (NTRS)

    Heric, Matthew; Cox, William; Gordon, Daniel K.

    1987-01-01

    In an attempt to improve the land cover/use classification accuracy obtainable from remotely sensed multispectral imagery, Airborne Imaging Spectrometer-1 (AIS-1) images were analyzed in conjunction with Thematic Mapper Simulator (NS001), Large Format Camera color infrared photography, and black and white aerial photography. Specific portions of the combined data set were registered and used for classification. Following this procedure, the resulting derived data were tested using an overall accuracy assessment method. Precise photogrammetric 2D-3D-2D geometric modeling techniques are not the basis for this study. Instead, the discussion presents the spectral findings from the image-to-image registrations. Problems associated with the AIS-1/TMS integration are considered, and useful applications of the imagery combination are presented. More advanced methodologies for imagery integration are needed if multisystem data sets are to be utilized fully. Nevertheless, the research described herein provides a formulation for future Earth Observation Station related multisensor studies.

  11. Maximizing the Performance of Automated Low Cost All-sky Cameras

    NASA Technical Reports Server (NTRS)

    Bettonvil, F.

    2011-01-01

    Thanks to the wide spread of digital camera technology in the consumer market, a steady increase in the number of active all-sky cameras has been noticed Europe-wide. In this paper I look into the details of such all-sky systems and try to optimize their performance in terms of astrometric accuracy, velocity determination, and photometry. With autonomous operation in mind, suggestions are made for an optimal low-cost all-sky camera.

  12. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e., separation distance, camera angles, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
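
The IMU-radar idea can be sketched as follows, assuming the IMUs supply each camera's orientation as a rotation matrix and the radar supplies only the separation distance; the baseline direction vector is an assumed extra input, since a range-only sensor does not provide it:

```python
import numpy as np

def relative_extrinsics(R1, R2, baseline_m, direction):
    """Relative rotation and translation between two stereo cameras.

    R1, R2: 3x3 world-to-camera rotation matrices from each camera's IMU;
    baseline_m: separation distance from the radar sensor;
    direction: assumed unit vector (in camera-1 coordinates) toward
    camera 2. Returns (R_rel, t_rel) for the stereo calibration.
    """
    R_rel = R2 @ R1.T  # rotation taking camera-1 frame to camera-2 frame
    t_rel = baseline_m * np.asarray(direction, dtype=float)
    return R_rel, t_rel
```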

  13. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.

  14. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  15. Characterizing Aerosol Distributions and Optical Properties Using the NASA Langley High Spectral Resolution Lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hostetler, Chris; Ferrare, Richard

    The objective of this project was to provide vertically and horizontally resolved data on aerosol optical properties to assess and ultimately improve how models represent these aerosol properties and their impacts on atmospheric radiation. The approach was to deploy the NASA Langley Airborne High Spectral Resolution Lidar (HSRL) and other synergistic remote sensors on DOE Atmospheric Science Research (ASR) sponsored airborne field campaigns and synergistic field campaigns sponsored by other agencies to remotely measure aerosol backscattering, extinction, and optical thickness profiles. Synergistic sensors included a nadir-viewing digital camera for context imagery, and, later in the project, the NASA Goddard Institute for Space Studies (GISS) Research Scanning Polarimeter (RSP). The information from the remote sensing instruments was used to map the horizontal and vertical distribution of aerosol properties and type. The retrieved lidar parameters include profiles of aerosol extinction, backscatter, depolarization, and optical depth. Products produced in subsequent analyses included aerosol mixed layer height, aerosol type, and the partition of aerosol optical depth by type. The lidar products provided vertical context for in situ and remote sensing measurements from other airborne and ground-based platforms employed in the field campaigns and were used to assess the predictions of transport models. Also, the measurements provide a database for future evaluation of techniques to combine active (lidar) and passive (polarimeter) measurements in advanced retrieval schemes to remotely characterize aerosol microphysical properties. The project was initiated as a 3-year project starting 1 January 2005. It was later awarded continuation funding for another 3 years (i.e., through 31 December 2010) followed by a 1-year no-cost extension (through 31 December 2011).
    This project supported logistical and flight costs of the NASA sensors on a dedicated aircraft, the subsequent analysis and archival of the data, and the presentation of results in conferences, workshops, and publications. DOE ASR field campaigns supported under this project included - MAX-Mex /MILAGRO (2006) - TexAQS 2006/GoMACCS (2006) - CHAPS (2007) - RACORO (2009) - CARE/CalNex (2010) In addition, data acquired on HSRL airborne field campaigns sponsored by other agencies were used extensively to fulfill the science objectives of this project and the data acquired have been made available to other DOE ASR investigators upon request.

  16. Laptop Circulation at Eastern Washington University

    ERIC Educational Resources Information Center

    Munson, Doris; Malia, Elizabeth

    2008-01-01

    In 2001, Eastern Washington University's Libraries began a laptop circulation program with seventeen laptops. Today, there are 150 laptops in the circulation pool, as well as seventeen digital cameras, eleven digital handycams, and thirteen digital projectors. This article explains how the program has grown to its present size, the growing pains…

  17. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    PubMed

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.

  18. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera

    PubMed Central

    Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791

  19. Digital camera and smartphone as detectors in paper-based chemiluminometric genotyping of single nucleotide polymorphisms.

    PubMed

    Spyrou, Elena M; Kalogianni, Despina P; Tragoulias, Sotirios S; Ioannou, Penelope C; Christopoulos, Theodore K

    2016-10-01

    Chemi(bio)luminometric assays have contributed greatly to various areas of nucleic acid analysis due to their simplicity and detectability. In this work, we present the development of chemiluminometric genotyping methods in which (a) detection is performed by using either a conventional digital camera (at ambient temperature) or a smartphone and (b) a lateral flow assay configuration is employed for even higher simplicity and suitability for point of care or field testing. The genotyping of the C677T single nucleotide polymorphism (SNP) of methylenetetrahydropholate reductase (MTHFR) gene is chosen as a model. The interrogated DNA sequence is amplified by polymerase chain reaction (PCR) followed by a primer extension reaction. The reaction products are captured through hybridization on the sensing areas (spots) of the strip. Streptavidin-horseradish peroxidase conjugate is used as a reporter along with a chemiluminogenic substrate. Detection of the emerging chemiluminescence from the sensing areas of the strip is achieved by digital camera or smartphone. For this purpose, we constructed a 3D-printed smartphone attachment that houses inexpensive lenses and converts the smartphone into a portable chemiluminescence imager. The device enables spatial discrimination of the two alleles of a SNP in a single shot by imaging of the strip, thus avoiding the need of dual labeling. The method was applied successfully to genotyping of real clinical samples. Graphical abstract: Paper-based genotyping assays using digital camera and smartphone as detectors.

  20. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

    This paper describes a network-oriented, long-recording-time high-speed digital video camera system that utilizes an HDD (Hard Disk Drive) as a recording medium. Semiconductor memories (DRAM, etc.) are the most common image data recording media with existing high-speed digital video cameras. They are extensively used because of their advantage of high-speed writing and reading of picture data. The drawback is that their recording time is limited to only several seconds because the data amount is very large. A recording time of several seconds is sufficient for many applications. However, a much longer recording time is required in some applications where an exact prediction of trigger timing is hard to make. In the Late years, the recording density of the HDD has been dramatically improved, which has attracted more attention to its value as a long-recording-time medium. We conceived an idea that we would be able to build a compact system that makes possible a long time recording if the HDD can be used as a memory unit for high-speed digital image recording. However, the data rate of such a system, capable of recording 640 X 480 pixel resolution pictures at 500 frames per second (fps) with 8-bit grayscale is 153.6 Mbyte/sec., and is way beyond the writing speed of the commonly used HDD. So, we developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
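
The quoted data rate is straightforward to verify:

```python
# Arithmetic behind the 153.6 Mbyte/s figure: full-resolution 8-bit
# grayscale frames at 500 fps.
width, height = 640, 480
frames_per_second = 500
bytes_per_pixel = 1  # 8-bit grayscale
rate_bytes = width * height * frames_per_second * bytes_per_pixel
print(rate_bytes / 1e6, "Mbyte/s")  # 153.6 Mbyte/s
```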

  1. The study of integration about measurable image and 4D production

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun

    2008-12-01

    In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a three-dimensional landscape model combining DEM and DOM, based on digital photogrammetry applied to the aerial image data used to make "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic). For the buildings and other artificial features the users are interested in, we perform three-dimensional reconstruction of the real features using digital close-range photogrammetry, following these steps: data collection with non-metric cameras, camera calibration, feature extraction, image matching, and other steps. Finally, we combine the three-dimensional background and locally measured real images of these large geographic data and realize the integration of measurable real imagery and the 4D products. The article discusses the overall workflow and technology, achieving three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape and the metric building model.

  2. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  3. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  4. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  5. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  6. 32 CFR 813.2 - Sources of VIDOC.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...

  7. [Nitrogen status diagnosis and yield prediction of spring maize after green manure incorporation by using a digital camera].

    PubMed

    Bai, Jin-Shun; Cao, Wei-Dong; Xiong, Jing; Zeng, Nao-Hua; Shimizu, Katshyoshi; Rui, Yu-Kui

    2013-12-01

    In order to explore the feasibility of using image processing technology to diagnose nitrogen status and to predict maize yield, a field experiment with different nitrogen rates and green manure incorporation was conducted. Maize canopy digital images over a range of growth stages were captured by digital camera. Maize nitrogen status and the relationships between camera-derived image color indices at different growth stages and maize nitrogen status indicators were analyzed. These image color indices were also regressed against maize grain yield at maturity. The results showed that plant nitrogen status was improved by green manure application. The leaf chlorophyll content (SPAD value), aboveground biomass, and nitrogen uptake for green manure treatments at different maize growth stages were all higher than those for chemical fertilization treatments. The correlations between spectral indices and plant nitrogen indicators for maize under green manure application were weaker than those under chemical fertilization, and the correlation coefficients varied with growth stage. The best spectral indices for diagnosis of plant nitrogen status after green manure incorporation were the normalized blue value (B/(R+G+B)) at the 12-leaf (V12) stage and the normalized red value (R/(R+G+B)) at the grain-filling (R4) stage. The coefficients of determination based on linear regression were 0.45 for B/(R+G+B) at the V12 stage and 0.46 for R/(R+G+B) at the R4 stage, respectively, acting as predictors of maize yield response to nitrogen as affected by green manure incorporation.
    Our findings suggest that digital image techniques could be a potential tool for in-season prediction of nitrogen status and grain yield for maize after green manure incorporation, provided suitable growth stages and spectral indices are selected.
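
The normalized color indices used above are simple per-image statistics; a sketch of computing them from a canopy photo:

```python
import numpy as np

def normalized_rgb_indices(img):
    """Mean normalized color indices of a canopy image.

    img: H x W x 3 array of R, G, B values. Returns the image-mean
    r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B); pixels with zero total
    intensity contribute zero rather than dividing by zero.
    """
    totals = img.sum(axis=-1, keepdims=True)
    norm = np.divide(img, totals, out=np.zeros_like(img, dtype=float),
                     where=totals > 0)
    r, g, b = norm.reshape(-1, 3).mean(axis=0)
    return r, g, b
```

These per-image means are what would then be regressed against agronomic indicators such as SPAD value or grain yield.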

  8. Accuracy analysis for DSM and orthoimages derived from SPOT HRS stereo data using direct georeferencing

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred

    During the HRS (High Resolution Stereo) Scientific Assessment Program, the French space agency CNES delivered data sets from the HRS camera system with high precision ancillary data. Two test data sets from this program were evaluated: one located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from positioning and attitude systems, were extracted. A dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection led to points in object space, which were subsequently interpolated to a DSM on a regular grid. DEM filtering methods were also applied, and evaluations were carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias on the order of 5-20 m relative to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently.
This method improved the absolute accuracy of the resulting orthoimages and DSM significantly.
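    The forward intersection step described above can be sketched as the least-squares meeting point of two viewing rays (a minimal numpy illustration of the geometry, not DLR's production code; camera centres and directions are hypothetical):

```python
import numpy as np

def forward_intersection(c1, d1, c2, d2):
    """Least-squares intersection of two viewing rays.

    c1, c2: camera projection centres; d1, d2: viewing directions.
    Returns the object-space point minimising the distance to both rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for the ray parameters of the two closest-approach points.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return 0.5 * (p1 + p2)  # midpoint of the shortest connecting segment

# Two stereo looking directions whose rays meet at the origin:
point = forward_intersection(np.array([0., 0., 10.]), np.array([0., 0., -1.]),
                             np.array([5., 0., 10.]), np.array([-0.5, 0., -1.]))
```

    In the DSM pipeline this intersection is repeated for every matched parallax, and the resulting object-space points are interpolated to a regular grid.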

  9. Traceable Calibration, Performance Metrics, and Uncertainty Estimates of Minirhizotron Digital Imagery for Fine-Root Measurements

    PubMed Central

    Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward

    2014-01-01

    Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
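    The trueness/precision criteria above follow standard metrology usage: trueness is the bias of the mean against a reference value, precision the spread of repeated measurements. A minimal sketch with hypothetical repeated root-diameter readings (the numbers are illustrative, not the paper's data):

```python
import statistics

def trueness_and_precision(measurements, reference):
    """Trueness = bias of the mean relative to the reference value;
    precision = standard deviation of the repeated measurements."""
    mean = statistics.fmean(measurements)
    trueness = mean - reference
    precision = statistics.stdev(measurements)
    return trueness, precision

# Hypothetical repeated diameter readings (mm) against a 1.00 mm reference:
bias, spread = trueness_and_precision([1.02, 0.98, 1.01, 1.03, 0.99], 1.00)
```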

  10. Digital holographic interferometry for characterizing deformable mirrors in aero-optics

    NASA Astrophysics Data System (ADS)

    Trolinger, James D.; Hess, Cecil F.; Razavi, Payam; Furlong, Cosme

    2016-08-01

    Measuring and understanding the transient behavior of a surface with high spatial and temporal resolution is required in many areas of science. This paper describes the development and application of a high-speed, high-dynamic-range digital holographic interferometer for high-speed surface contouring with fractional-wavelength precision and high spatial resolution. The specific application under investigation here is the characterization of deformable mirrors (DM) employed in aero-optics. The developed instrument was shown to be capable of contouring a deformable mirror with extremely high resolution at frequencies exceeding 40 kHz. We demonstrated two different procedures for characterizing the mechanical response of a surface to a wide variety of input forces: one that employs a high-speed digital camera and a second that employs a low-speed, low-cost digital camera. The latter is achieved by cycling the DM actuators with a step input, producing a transient that typically lasts up to a millisecond before reaching equilibrium. Recordings are made at increasing times after the DM initiation, from zero to equilibrium, to analyze the transient. Because the wave functions are stored and reconstructable, they can be compared with each other to produce contours, including absolute, difference, and velocity contours. High-speed digital cameras recorded the wave functions during a single transient at rates exceeding 40 kHz. We concluded that either method is fully capable of characterizing a typical DM to the extent required by aero-optical engineers.
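    Comparing stored wave functions reduces to subtracting reconstructed phase maps and wrapping the result back into one fringe period; a minimal numpy sketch with synthetic phases (illustrative only, not the instrument's reconstruction chain):

```python
import numpy as np

def phase_difference(phi_a, phi_b):
    """Wrapped difference of two reconstructed phase maps, in (-pi, pi]."""
    return np.angle(np.exp(1j * (phi_b - phi_a)))

# Illustrative: a uniform surface step of 3/4 wavelength appears wrapped.
phi_ref = np.zeros((4, 4))                 # reference state
phi_def = np.full((4, 4), 1.5 * np.pi)     # deformed state, 3/4 of a fringe
dphi = phase_difference(phi_ref, phi_def)  # wraps to -pi/2
```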

  11. Use of a digital camera to monitor the growth and nitrogen status of cotton.

    PubMed

    Jia, Biao; He, Haibing; Ma, Fuyu; Diao, Ming; Jiang, Guiying; Zheng, Zhong; Cui, Jin; Fan, Hua

    2014-01-01

    The main objective of this study was to develop a nondestructive method for monitoring cotton growth and N status using a digital camera. Digital images were taken of the cotton canopies between emergence and full bloom. The green and red values were extracted from the digital images and used to calculate canopy cover. Canopy cover was closely correlated with the normalized difference vegetation index and the ratio vegetation index measured using a GreenSeeker handheld sensor. Models were calibrated to describe the relationship between canopy cover and three growth properties of the cotton crop (i.e., aboveground total N content, LAI, and aboveground biomass). There were close, exponential relationships between canopy cover and the three growth properties. The relationship for estimating aboveground total N content was the most precise, with a coefficient of determination (R(2)) of 0.978 and a root mean square error (RMSE) of 1.479 g m(-2). Moreover, the models were validated in three fields of high-yield cotton. The results indicated that the best relationship, between canopy cover and aboveground total N content, had an R(2) of 0.926 and an RMSE of 1.631 g m(-2). In conclusion, as a near-ground remote assessment tool, digital cameras have good potential for monitoring cotton growth and N status.
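    The two steps of such a pipeline, a canopy-cover fraction from green/red pixel values and an exponential model linking cover to N content, can be sketched as follows (the classification rule, threshold, and numbers are illustrative, not the paper's calibration):

```python
import numpy as np

def canopy_cover(green, red, threshold=1.05):
    """Fraction of pixels classified as canopy by a simple green/red
    ratio rule (rule and threshold are illustrative assumptions)."""
    ratio = green.astype(float) / np.maximum(red.astype(float), 1.0)
    return float(np.mean(ratio > threshold))

def fit_exponential(cover, n_content):
    """Fit N = a * exp(b * cover) by linear regression on log(N)."""
    b, log_a = np.polyfit(cover, np.log(n_content), 1)
    return np.exp(log_a), b

cc = canopy_cover(np.array([[120, 80]]), np.array([[60, 100]]))  # 0.5

# Synthetic calibration: cover fractions vs. aboveground N (g m^-2).
cover = np.array([0.2, 0.4, 0.6, 0.8])
n_obs = 2.0 * np.exp(1.5 * cover)
a, b = fit_exponential(cover, n_obs)
```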

  12. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with a macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
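    The cited autocorrelation approach rests on a simple observation: images of coarse sediment stay correlated with offset copies of themselves over larger pixel offsets than images of fine sediment. A simplified 1-D sketch (real use is calibrated against images of known grain size; this is not the published code):

```python
import numpy as np

def autocorrelation_curve(image, max_offset):
    """Correlation between the image and itself shifted horizontally by
    1..max_offset pixels; coarse grains decorrelate more slowly."""
    img = image.astype(float)
    return np.array([
        np.corrcoef(img[:, :-k].ravel(), img[:, k:].ravel())[0, 1]
        for k in range(1, max_offset + 1)
    ])

# Synthetic textures: fine = pixel-scale noise, coarse = 4-pixel-wide blocks.
rng = np.random.default_rng(0)
fine = rng.random((64, 64))
coarse = np.repeat(rng.random((64, 16)), 4, axis=1)
curve_fine = autocorrelation_curve(fine, 3)
curve_coarse = autocorrelation_curve(coarse, 3)
```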

  13. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras.
This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.

  14. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with different sizes of microlenses. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method is more automated than previously published methods.

  15. Method for measuring the focal spot size of an x-ray tube using a coded aperture mask and a digital detector.

    PubMed

    Russo, Paolo; Mettivier, Giovanni

    2011-04-01

    The goal of this study is to evaluate a new method based on a coded aperture mask combined with a digital x-ray imaging detector for measurements of the focal spot sizes of diagnostic x-ray tubes. Common techniques for focal spot size measurements employ a pinhole camera, a slit camera, or a star resolution pattern. The coded aperture mask is a radiation collimator consisting of a large number of apertures disposed on a predetermined grid in an array, through which the radiation source is imaged onto a digital x-ray detector. The method of the coded mask camera allows one to obtain a one-shot accurate and direct measurement of the two dimensions of the focal spot (like that for a pinhole camera) but at a low tube loading (like that for a slit camera). A large number of small apertures in the coded mask operate as a "multipinhole" with greater efficiency than a single pinhole, but keeping the resolution of a single pinhole. X-ray images result from the multiplexed output on the detector image plane of such a multiple aperture array, and the image of the source is digitally reconstructed with a deconvolution algorithm. Images of the focal spot of a laboratory x-ray tube (W anode: 35-80 kVp; focal spot size of 0.04 mm) were acquired at different geometrical magnifications with two different types of digital detector (a photon counting hybrid silicon pixel detector with 0.055 mm pitch and a flat panel CMOS digital detector with 0.05 mm pitch) using a high resolution coded mask (type no-two-holes-touching modified uniformly redundant array) with 480 0.07 mm apertures, designed for imaging at energies below 35 keV. Measurements with a slit camera were performed for comparison. A test with a pinhole camera and with the coded mask on a computed radiography mammography unit with 0.3 mm focal spot was also carried out. 
The full width at half maximum focal spot sizes were obtained from the line profiles of the decoded images, showing a focal spot of 0.120 mm x 0.105 mm at 35 kVp and M = 6.1, with a detector entrance exposure as low as 1.82 mR (0.125 mA s tube load). The slit camera indicated a focal spot of 0.112 mm x 0.104 mm at 35 kVp and M = 3.15, with an exposure at the detector of 72 mR. Focal spot measurements with the coded mask could be performed up to 80 kVp. Tolerance to angular misalignment with the reference beam of up to 7 degrees for in-plane rotations and 1 degree for out-of-plane rotations was observed. The axial distance of the focal spot from the coded mask could also be determined. It is possible to determine the beam intensity via measurement of the intensity of the decoded image of the focal spot and a calibration procedure. Coded aperture masks coupled to a digital area detector provide precise determination of the focal spot of an x-ray tube with reduced tube loading and measurement time, together with a large tolerance in the alignment of the mask.
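    The decoding principle can be illustrated with a correlation-based reconstruction: a point-like focal spot projects a shifted copy of the mask onto the detector, and correlating the detector image with a balanced decoding array recovers a peak at the source position. The sketch below uses a random binary mask as a stand-in for the actual no-two-holes-touching MURA and its dedicated decoding array:

```python
import numpy as np

def decode(detector, mask):
    """Reconstruct the source by periodic cross-correlation of the
    detector image with a balanced (zero-mean) version of the mask."""
    g = mask - mask.mean()  # balanced decoding array (illustrative)
    spectrum = np.fft.fft2(detector) * np.conj(np.fft.fft2(g))
    return np.real(np.fft.ifft2(spectrum))

rng = np.random.default_rng(1)
mask = (rng.random((32, 32)) < 0.5).astype(float)

# A point-like focal spot at (5, 7) projects a shifted copy of the mask.
detector = np.roll(mask, (5, 7), axis=(0, 1))
rec = decode(detector, mask)
peak = np.unravel_index(np.argmax(rec), rec.shape)
```

    An extended focal spot is the superposition of such shifted mask copies, so the decoded image reproduces the spot's shape, from which the FWHM profiles are taken.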

  16. CMOS image sensor with organic photoconductive layer having narrow absorption band and proposal of stack type solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi

    2006-02-01

    Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.

  17. Pulsed spatial phase-shifting digital shearography based on a micropolarizer camera

    NASA Astrophysics Data System (ADS)

    Aranchuk, Vyacheslav; Lal, Amit K.; Hess, Cecil F.; Trolinger, James Davis; Scott, Eddie

    2018-02-01

    We developed a pulsed digital shearography system that utilizes the spatial phase-shifting technique. The system employs a commercial micropolarizer camera and a double-pulse laser, which allows for instantaneous phase measurements. The system can measure dynamic deformation of objects as large as 1 m at a 2-m distance during the time between the two laser pulses, which can range from 30 μs to 30 ms. The ability of the system to measure dynamic deformation was demonstrated by obtaining phase-wrapped and unwrapped shearograms of a vibrating object.
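    In polarization-based spatial phase shifting, the micropolarizer camera captures four polarizer orientations (0°, 45°, 90°, 135°) in a single frame; assuming these correspond to phase shifts of 0, π/2, π, and 3π/2 (the exact mapping depends on the optical setup), the standard four-step formula recovers the phase:

```python
import numpy as np

def four_step_phase(i0, i45, i90, i135):
    """Standard four-step phase retrieval, assuming the micropolarizer
    orientations supply interferograms shifted by 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(i135 - i45, i0 - i90)

# Synthetic interferograms with a known phase ramp:
phi = np.linspace(-np.pi / 2, np.pi / 2, 32)
frames = [1.0 + np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)  # equals phi up to rounding
```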

  18. STS-116 MS Fuglesang uses digital camera on the STBD side of the S0 Truss during EVA 4

    NASA Image and Video Library

    2006-12-19

    S116-E-06882 (18 Dec. 2006) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses a digital still camera during the mission's fourth session of extravehicular activity (EVA) while Space Shuttle Discovery was docked with the International Space Station. Astronaut Robert L. Curbeam Jr. (out of frame), mission specialist, worked in tandem with Fuglesang, using specially-prepared, tape-insulated tools, to guide the array wing neatly inside its blanket box during the 6-hour, 38-minute spacewalk.

  19. Removal of instrument signature from Mariner 9 television images of Mars

    NASA Technical Reports Server (NTRS)

    Green, W. B.; Jepsen, P. L.; Kreznar, J. E.; Ruiz, R. M.; Schwartz, A. A.; Seidman, J. B.

    1975-01-01

    The Mariner 9 spacecraft was inserted into orbit around Mars in November 1971. The two vidicon camera systems returned over 7300 digital images during orbital operations. The high volume of returned data and the scientific objectives of the Television Experiment made development of automated digital techniques for the removal of camera system-induced distortions from each returned image necessary. This paper describes the algorithms used to remove geometric and photometric distortions from the returned imagery. Enhancement processing of the final photographic products is also described.
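    The photometric side of such decalibration is commonly a dark-frame subtraction followed by division by a normalized flat field; the sketch below shows that generic scheme (the actual Mariner 9 vidicon corrections were more involved):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Generic photometric correction: subtract the dark frame and
    divide by the normalised flat field (sensor shading surrogate)."""
    flat_norm = (flat - dark) / np.mean(flat - dark)
    return (raw - dark) / flat_norm

# Synthetic shading: uniform 100 DN scene, response falling to 0.5 at the edge.
response = np.linspace(1.0, 0.5, 8)
dark = np.full(8, 10.0)
raw = 100.0 * response + dark
flat = 80.0 * response + dark
corrected = flat_field_correct(raw, dark, flat)  # uniform (75 DN here)
```

    The correction recovers a uniform scene up to a constant scale set by the flat-field normalization, which is why an absolute radiometric calibration step follows in practice.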

  20. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing demand for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system including two or four tilt-shift cameras (TSC; camera model: Nikon D90) was developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187

  1. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-03-31

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing demand for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system including two or four tilt-shift cameras (TSC; camera model: Nikon D90) was developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products.

  2. Characterization of Vegetation using the UC Davis Remote Sensing Testbed

    NASA Astrophysics Data System (ADS)

    Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.

    2006-12-01

    Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight, and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAV) and other mobile platforms. Our approach provides rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines, and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, inertial measurement unit, 3-axis electronic compass, and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. Cameras were calibrated using selective light sources, an integrating sphere, and a spectrometer, allowing the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as image applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
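    With a calibrated red/NIR camera, the NDVI mentioned above is a per-pixel ratio; a minimal sketch (reflectance values are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from calibrated
    red and NIR channels: (NIR - red) / (NIR + red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and weakly in red:
veg = ndvi(np.array([0.50]), np.array([0.08]))   # high NDVI
soil = ndvi(np.array([0.30]), np.array([0.25]))  # near zero
```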

  3. Method for acquiring, storing and analyzing crystal images

    NASA Technical Reports Server (NTRS)

    Gester, Thomas E. (Inventor); Rosenblum, William M. (Inventor); Christopher, Gayle K. (Inventor); Hamrick, David T. (Inventor); Delucas, Lawrence J. (Inventor); Tillotson, Brian (Inventor)

    2003-01-01

    A system utilizing a digital computer for acquiring, storing and evaluating crystal images. The system includes a video camera (12) which produces a digital output signal representative of a crystal specimen positioned within its focal window (16). The digitized output from the camera (12) is then stored on data storage media (32) together with other parameters inputted by a technician and relevant to the crystal specimen. Preferably, the digitized images are stored on removable media (32) while the parameters for different crystal specimens are maintained in a database (40) with indices to the digitized optical images on the other data storage media (32). Computer software is then utilized to identify not only the presence and number of crystals and the edges of the crystal specimens from the optical image, but to also rate the crystal specimens by various parameters, such as edge straightness, polygon formation, aspect ratio, surface clarity, crystal cracks and other defects or lack thereof, and other parameters relevant to the quality of the crystals.

  4. Exploring of PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration

    NASA Astrophysics Data System (ADS)

    Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin

    2018-01-01

    This study adopts digital photography to monitor bridge dynamic deflection under vibration. The digital photography used here is based on PST-TBPM (photographing scale transformation-time baseline parallax method). First, a digital camera captures the bridge at rest as the zero image. Then the camera captures the bridge in vibration every three seconds as the successive images. Based on the reference system, PST-TBPM is applied to these images to obtain the bridge's dynamic deflection under vibration. Results show that the average measurement accuracies are 0.615 and 0.79 pixels in the X and Z directions, respectively. The maximal deflection of the bridge is 7.14 pixels. PST-TBPM is valid in solving the problem that the photographing direction is not perpendicular to the bridge. Digital photography as used in this study can assess bridge health by monitoring the bridge's dynamic deflection under vibration, and the deformation trend curves over time can also warn of possible dangers.
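    The scale-transformation idea behind such parallax methods is that pixel parallax between the zero image and a successive image becomes metric deflection through the photographing scale. A simplified sketch (the paper reports results in pixels; distances, focal length, and pixel pitch below are hypothetical, and the perpendicularity correction that PST-TBPM adds is omitted):

```python
def pixel_to_deflection(parallax_px, object_distance_m, focal_length_mm,
                        pixel_pitch_mm):
    """Convert image-space parallax (pixels) to object-space deflection
    (metres) via the photographing scale. Simplified model: assumes the
    optical axis is perpendicular to the motion being measured."""
    scale = object_distance_m / (focal_length_mm / 1000.0)  # object/image
    return parallax_px * (pixel_pitch_mm / 1000.0) * scale

# Hypothetical setup: 50 m range, 50 mm lens, 5-micron pixels,
# and the paper's maximal parallax of 7.14 pixels.
deflection = pixel_to_deflection(7.14, 50.0, 50.0, 0.005)  # metres
```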

  5. Accuracy Assessment of Lidar-Derived Digital Terrain Model (dtm) with Different Slope and Canopy Cover in Tropical Forest Region

    NASA Astrophysics Data System (ADS)

    Salleh, M. R. M.; Ismail, Z.; Rahman, M. Z. A.

    2015-10-01

    Airborne Light Detection and Ranging (LiDAR) technology has been widely used in recent years, especially for generating high-accuracy Digital Terrain Models (DTM). High-density, good-quality airborne LiDAR data promise a high-quality DTM. This study focuses on analysing the error associated with the density of vegetation cover (canopy cover) and terrain slope in a LiDAR-derived DTM in a tropical forest environment in Bentong, State of Pahang, Malaysia. The airborne LiDAR data, captured by a Riegl system mounted on an aircraft, can be considered low density. The ground filtering procedure uses the adaptive triangulated irregular network (ATIN) algorithm to produce ground points. Next, ground control points (GCPs) were used to generate the reference DTM; this DTM was used for slope classification, and the non-ground point clouds were then used to determine the relative percentage of canopy cover. The results show that terrain slope is highly correlated with the RMSE of the LiDAR-derived DTM for both study areas (0.993 and 0.870). The same holds for canopy cover, where high correlation values (0.989 and 0.924) were obtained. This indicates that the accuracy of an airborne LiDAR-derived DTM is significantly affected by the terrain slope and canopy cover of the study area.

  6. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  7. Improved TDEM formation using fused ladar/digital imagery from a low-cost small UAV

    NASA Astrophysics Data System (ADS)

    Khatiwada, Bikalpa; Budge, Scott E.

    2017-05-01

    Formation of a Textured Digital Elevation Model (TDEM) is useful in many applications in the fields of agriculture, disaster response, terrain analysis, and more. Use of a low-cost small UAV system with a texel camera (fused lidar/digital imagery) can significantly reduce the cost compared to conventional aircraft-based methods. This paper continues the work reported in a previous paper by Bybee and Budge and reports improvements in performance. A UAV fitted with a texel camera is flown at a fixed height above the terrain, and swaths of texel image data of the terrain below are taken continuously. Each texel swath has one or more lines of lidar data surrounded by a narrow strip of EO data. Texel swaths are taken such that there is some overlap from one swath to the adjacent swath. The GPS/IMU fitted to the camera also gives coarse knowledge of attitude and position. Using this coarse knowledge and the information in the texel images, the error in camera position and attitude is reduced, which helps produce an accurate TDEM. This paper improves on the original work by using multiple lines of lidar data per swath. The final results are shown and analyzed for numerical accuracy.

  8. Use of ALS data for digital terrain extraction and roughness parametrization in floodplain areas

    NASA Astrophysics Data System (ADS)

    Idda, B.; Nardinocchi, C.; Marsella, M.

    2009-04-01

    In order to undertake structural and land planning actions aimed at improving risk thresholds and reducing the vulnerability associated with floodplain inundation, evaluating the area affected by the channel overflowing its natural embankments is of essential importance. Floodplain models require the analysis of historical floodplain extents, the morphological structure of the ground, and hydraulic measurements. Within this set of information, a more detailed characterization of the hydraulic roughness, which controls the velocity of the hydraulic flow, is an interesting challenge for achieving a 2D spatial distribution in the model. Optical and radar remote sensing techniques can be applied to generate 2D and 3D map products useful for delineating the floodplain extent during the main event and for extrapolating river cross-sections. Among these techniques, the enhancement that the Airborne Laser Scanner (ALS) has brought through its capability to extract high-resolution, accurate Digital Terrain Models is unquestionable. In hydraulic applications, a number of studies have investigated the use of ALS for DTM generation and have approached quantitative estimation of the hydraulic roughness. The aim of this work is the generation of a digital terrain model and the estimation of hydraulic parameters useful for floodplain models from Airborne Laser Scanner data collected in a test area enclosing a portion of the drainage basin of the Mela river (Sicily, Italy). From the ALS dataset, a high-resolution Digital Elevation Model was first created; then, after applying filtering and classification processes, a dedicated procedure was implemented to automatically assign a value of the hydraulic roughness coefficient (in Manning's formulation) to each point of interest in the floodplain. 
The obtained results allowed the generation of equal-roughness maps, dependent on the hydraulic level, based on the application of empirical formulas for the specific vegetation type at each classified ALS point.

  9. Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source

    PubMed Central

    Muller, Matthew S.; Elsner, Ann E.

    2018-01-01

    A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging. PMID:29899586

  10. Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System

    NASA Astrophysics Data System (ADS)

    Stebner, K.; Wieden, A.

    2014-03-01

    Dynamic camera systems with moving parts are difficult to handle in photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Minimum changes of the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical viewing and a tumbling oblique camera was used for this investigation. Focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.

  11. Practical image registration concerns overcome by the weighted and filtered mutual information metric

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Saber, Eli; Rhody, Harvey; Savakis, Andreas; Raj, Jeffrey

    2012-04-01

    Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
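The metric at the heart of such a search can be sketched compactly. The following is a minimal joint-histogram estimator of mutual information between two grayscale views (an illustrative sketch, not the authors' implementation; the bin count and test images are arbitrary):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (nats) between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()           # joint probability
    px = pxy.sum(axis=1)                # marginal of img_a
    py = pxy.sum(axis=0)                # marginal of img_b
    nz = pxy > 0                        # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# An image shares maximal information with itself; MI with independent noise is near zero.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
noise = rng.integers(0, 256, (64, 64)).astype(float)
assert mutual_information(img, img) > mutual_information(img, noise)
```

An MMI registration search would evaluate this metric repeatedly while varying the affine parameters of one view, keeping the transformation that maximizes it.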

  12. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium underwater can in turn counteract some of the lens distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The tests included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from its waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, a largest rms value of only 0.450 mm and a largest maximal residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.
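The rms figures above summarize check-point residuals from the calibration frame. A minimal sketch of that statistic (the residual values below are hypothetical, not the paper's data):

```python
import math

def rms(values):
    """Root-mean-square of check-point residuals (mm)."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Hypothetical residuals (mm) between measured and known check-point coordinates.
residuals = [0.21, -0.35, 0.40, -0.18, 0.29]
print(round(rms(residuals), 3))  # 0.298
```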

  13. Method for the visualization of landform by mapping using low altitude UAV application

    NASA Astrophysics Data System (ADS)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly in mapping technology, and the significance of and need for digital landform mapping grow with the years. In this study, a mapping workflow is applied to obtain two different data sets: the orthophoto and the DSM. Low Altitude Aerial Photography (LAAP) was captured with a low-altitude UAV (drone) carrying a fixed advanced camera, while digital photogrammetric processing using PhotoScan was applied for cartographic data collection. Data processing through photogrammetry and orthomosaicking constitutes the main application. High imagery quality is essential for the effectiveness and quality of typical mapping outputs such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and orthoimages. The accuracy of Ground Control Points (GCPs), the flight altitude and the resolution of the camera are essential for a good-quality DEM and orthophoto.
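The link between flight altitude, camera resolution and ground detail can be made concrete with the standard ground-sample-distance relation; the altitude, focal length and pixel pitch below are hypothetical values for a small drone survey:

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Ground sample distance (m/pixel) for a nadir-looking frame camera:
    GSD = H * pixel_pitch / focal_length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical survey: 80 m altitude, 8.8 mm lens, 2.4 um pixels -> ~2.2 cm/pixel.
print(round(ground_sample_distance(80, 8.8, 2.4), 4))  # 0.0218
```

Halving the flight altitude halves the GSD, which is why low-altitude flights yield the fine detail needed for landform mapping.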

  14. Measurement of solar extinction in tower plants with digital cameras

    NASA Astrophysics Data System (ADS)

    Ballestrín, J.; Monterreal, R.; Carra, M. E.; Fernandez-Reche, J.; Barbero, J.; Marzo, A.

    2016-05-01

    Atmospheric extinction of solar radiation between the heliostat field and the receiver is accepted as a non-negligible source of energy loss in the increasingly large central receiver plants. However, the reality is that there is currently no reliable measurement method for this quantity, and at present these plants are designed, built and operated without knowing this local parameter. Nowadays digital cameras are used in many scientific applications for their ability to convert available light into digital images. Their broad spectral range, high resolution and high signal-to-noise ratio make them an interesting device in solar technology. In this work a method for atmospheric extinction measurement based on digital images is presented. The possibility of defining a measurement setup in circumstances similar to those of a tower plant increases the credibility of the method. This procedure is currently being implemented at the Plataforma Solar de Almería.
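One simple way to estimate extinction from camera data is a two-distance Beer-Lambert comparison of the same target. This is an illustrative sketch only, not the Plataforma Solar de Almería procedure, and the radiance readings and distances are hypothetical:

```python
import math

def extinction_coefficient(radiance_near, radiance_far, dist_near_m, dist_far_m):
    """Atmospheric extinction coefficient (1/m) from the Beer-Lambert law,
    given camera radiance readings of the same target at two path lengths:
    L_far = L_near * exp(-beta * (d_far - d_near))."""
    return math.log(radiance_near / radiance_far) / (dist_far_m - dist_near_m)

# Hypothetical digital counts at 500 m and 1500 m slant range.
beta = extinction_coefficient(1000.0, 905.0, 500.0, 1500.0)
print(round(100 * (1 - math.exp(-beta * 1000)), 1))  # % energy loss over 1 km
```

For a tower plant this percentage is the quantity of interest: the fraction of reflected solar power lost between a heliostat 1 km away and the receiver.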

  15. Non-Invasive Detection of Anaemia Using Digital Photographs of the Conjunctiva.

    PubMed

    Collings, Shaun; Thompson, Oliver; Hirst, Evan; Goossens, Louise; George, Anup; Weinkove, Robert

    2016-01-01

    Anaemia is a major health burden worldwide. Although the finding of conjunctival pallor on clinical examination is associated with anaemia, inter-observer variability is high, and definitive diagnosis of anaemia requires a blood sample. We aimed to detect anaemia by quantifying conjunctival pallor using digital photographs taken with a consumer camera and a popular smartphone. Our goal was to develop a non-invasive screening test for anaemia. The conjunctivae of haemato-oncology in- and outpatients were photographed in ambient lighting using a digital camera (Panasonic DMC-LX5), and the internal rear-facing camera of a smartphone (Apple iPhone 5S) alongside an in-frame calibration card. Following image calibration, conjunctival erythema index (EI) was calculated and correlated with laboratory-measured haemoglobin concentration. Three clinicians independently evaluated each image for conjunctival pallor. Conjunctival EI was reproducible between images (average coefficient of variation 2.96%). EI of the palpebral conjunctiva correlated more strongly with haemoglobin concentration than that of the forniceal conjunctiva. Using the compact camera, palpebral conjunctival EI had a sensitivity of 93% and 57% and specificity of 78% and 83% for detection of anaemia (haemoglobin < 110 g/L) in training and internal validation sets, respectively. Similar results were found using the iPhone camera, though the EI cut-off value differed. Conjunctival EI analysis compared favourably with clinician assessment, with a higher positive likelihood ratio for prediction of anaemia. Erythema index of the palpebral conjunctiva calculated from images taken with a compact camera or mobile phone correlates with haemoglobin and compares favourably to clinician assessment for prediction of anaemia. If confirmed in further series, this technique may be useful for the non-invasive screening for anaemia.
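The erythema index is essentially a red-to-green contrast measure on a calibrated image. A minimal sketch using the common log(red/green) approximation (the paper's calibrated formula may differ; the test patches are synthetic):

```python
import numpy as np

def erythema_index(rgb):
    """Per-pixel erythema index from a calibrated RGB image, using the common
    log(red/green) approximation (an assumption; the study's exact formula
    following calibration-card correction may differ)."""
    rgb = rgb.astype(float) + 1.0          # avoid log(0)
    return np.log10(rgb[..., 0] / rgb[..., 1])

# A pinker (better-perfused) patch scores higher than a pale one.
pink = np.full((4, 4, 3), (200, 120, 120), dtype=np.uint8)
pale = np.full((4, 4, 3), (200, 180, 180), dtype=np.uint8)
assert erythema_index(pink).mean() > erythema_index(pale).mean()
```

Screening would then threshold the mean EI of the palpebral conjunctiva region against a cut-off tuned per camera, mirroring the paper's finding that the iPhone and compact camera needed different cut-off values.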

  16. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN 4 language for the Control Data 6000 series digital computer.

  17. A time domain simulation of a beam control system

    NASA Astrophysics Data System (ADS)

    Mitchell, J. R.

    1981-02-01

    The Airborne Laser Laboratory (ALL) is being developed by the Air Force to investigate the integration and operation of high energy laser components in a dynamic airborne environment and to study the propagation of laser light from an airborne vehicle to an airborne target. The ALL is composed of several systems; among these are the Airborne Pointing and Tracking System (APT) and the Automatic Alignment System (AAS). This report presents the results of performing a time domain dynamic simulation for an integrated beam control system composed of the APT and AAS. The simulation is performed on a digital computer using the MIMIC language. It includes models of the dynamics of the system and of disturbances. Also presented in the report are the rationales and developments of these models. The data from the simulation code is summarized by several plots. In addition results from massaging the data with waveform analysis packages are presented. The results are discussed and conclusions are drawn.

  18. Professional Caregivers’ Perceptions on how Persons with Mild Dementia Might Experience the Usage of a Digital Photo Diary

    PubMed Central

    Harrefors, Christina; Sävenstedt, Stefan; Lundquist, Anders; Lundquist, Bengt; Axelsson, Karin

    2012-01-01

    Cognitive impairments influence the possibility of persons with dementia to remember daily events and maintain a sense of self. In order to address these problems a digital photo diary was developed to capture information about events in daily life. The device consisted of a wearable digital camera, a smart phone with Global Positioning System (GPS) and a home memory station with a computer for uploading the photographs and a touch screen. The aim of this study was to describe professional caregivers' perceptions of how persons with mild dementia might experience the usage of this digital photo diary, both when wearing the camera and when viewing the uploaded photos, through a questionnaire with 408 respondents. In order to capture the professional caregivers' perceptions a questionnaire with the semantic differential technique was used, and the main question was “How do you think Hilda (the fictive person in the questionnaire) feels when she is using the digital photo diary?”. The factor analysis revealed three factors: Sense of autonomy, Sense of self-esteem and Sense of trust. An interesting conclusion that can be drawn is that professional caregivers had an overall positive view of the usage of the digital photo diary as supporting autonomy for persons with mild dementia. The meaningfulness of each situation - wearing the camera and viewing the uploaded pictures - has to be considered separately, as the two form different parts of one integrated assistive device. The individual needs and desires of the person living with dementia and the context of each individual have to be reflected on and taken into account before implementing assistive digital devices as a tool in care. PMID:22509232

  19. Final Technical Report

    NASA Technical Reports Server (NTRS)

    Harper, D. A.

    1997-01-01

    Support for reduction and analysis of observations made with the Yerkes Observatory 60-channel far infrared camera on the Kuiper Airborne Observatory was funded through a Federal Grant. Data were reduced and made available to the research group at Yerkes and to guest observers. The reduced data are indexed on the World Wide Web. Portions of the data have been reported in the attached references.

  20. Airborne electromagnetic and magnetic survey data of the Paradox and San Luis Valleys, Colorado

    USGS Publications Warehouse

    Ball, Lyndsay B.; Bloss, Benjamin R.; Bedrosian, Paul A.; Grauch, V.J.S.; Smith, Bruce D.

    2015-01-01

    In October 2011, the U.S. Geological Survey (USGS) contracted airborne magnetic and electromagnetic surveys of the Paradox and San Luis Valleys in southern Colorado, United States. These airborne geophysical surveys provide high-resolution and spatially comprehensive datasets characterizing the resistivity structure of the shallow subsurface of each survey region, accompanied by magnetic-field information over matching areas. These data were collected to provide insight into the distribution of groundwater brine in the Paradox Valley, the extent of clay aquitards in the San Luis Valley, and to improve our understanding of the geologic framework for both regions. This report describes these contracted surveys and releases digital data supplied under contract to the USGS.

  1. Water depth measurement using an airborne pulsed neon laser system

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.; Swift, R. N.; Frederick, E. B.

    1980-01-01

    The paper presents water depth measurement using an airborne pulsed neon laser system. Initial baseline field-test results of the NASA airborne oceanographic lidar in the bathymetry mode are given, with water-truth measurements of depth and beam attenuation coefficients taken by boat at the same time as the overflights to aid in determining the system's operational performance. Nadir-angle tests and field-of-view data are presented; this laser bathymetry system is an improvement over prior models in that (1) the surface-to-bottom pulse waveform is digitally recorded on magnetic tape, and (2) wide-swath mapping data may be routinely acquired using a 30 deg full-angle conical scanner.
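The digitally recorded surface-to-bottom waveform yields depth directly from the two-way travel time in water. A minimal sketch of that conversion (the 44.4 ns pulse separation is a made-up example, and off-nadir refraction geometry is ignored):

```python
def water_depth(pulse_separation_ns, n_water=1.33):
    """Depth (m) from the time separation between the surface and bottom
    returns in a bathymetric lidar waveform: d = c * dt / (2 * n), where
    n is the refractive index of water (light travels at c/n in water)."""
    c = 299_792_458.0                      # speed of light in vacuum, m/s
    dt = pulse_separation_ns * 1e-9
    return c * dt / (2.0 * n_water)

print(round(water_depth(44.4), 2))  # 5.0 (m)
```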

  2. ATM Coastal Topography-Alabama 2001

    USGS Publications Warehouse

    Nayegandhi, Amar; Yates, Xan; Brock, John C.; Sallenger, A.H.; Bonisteel, Jamie M.; Klipp, Emily S.; Wright, C. Wayne

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Alabama coastline, acquired October 3-4, 2001. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative scanning Lidar instrument originally developed by NASA, and known as the Airborne Topographic Mapper (ATM), was used during data acquisition. The ATM system is a scanning Lidar system that measures high-resolution topography of the land surface, and incorporates a green-wavelength laser operating at pulse rates of 2 to 10 kilohertz. Measurements from the laser ranging device are coupled with data acquired from inertial navigation system (INS) attitude sensors and differentially corrected global positioning system (GPS) receivers to measure topography of the surface at accuracies of +/-15 centimeters. The nominal ATM platform is a Twin Otter or P-3 Orion aircraft, but the instrument may be deployed on a range of light aircraft. Elevation measurements were collected over the survey area using the ATM system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. 
ALPS is routinely used to create maps that represent submerged or first surface topography.
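The extraction of the first and last significant return from a recorded waveform can be sketched as a simple threshold pass over the range bins (an illustrative stand-in for the ALPS algorithms, applied to a synthetic waveform):

```python
import numpy as np

def significant_returns(waveform, threshold):
    """Indices (range bins) of the first and last samples exceeding a noise
    threshold -- a simplified stand-in for ALPS waveform processing."""
    above = np.flatnonzero(np.asarray(waveform) > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

# Synthetic waveform: noise floor ~2, surface return at bin 10, bottom at bin 30.
wf = np.full(50, 2.0)
wf[10], wf[30] = 40.0, 18.0
print(significant_returns(wf, threshold=5.0))  # (10, 30)
```

In first-surface topography the first return bin is converted to range and hence elevation; in bathymetry the first/last pair brackets the water column.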

  3. ATM Coastal Topography-Florida 2001: Eastern Panhandle

    USGS Publications Warehouse

    Yates, Xan; Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Bonisteel, Jamie M.; Klipp, Emily S.; Wright, C. Wayne

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the eastern Florida panhandle coastline, acquired October 2, 2001. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative scanning Lidar instrument originally developed by NASA, and known as the Airborne Topographic Mapper (ATM), was used during data acquisition. The ATM system is a scanning Lidar system that measures high-resolution topography of the land surface and incorporates a green-wavelength laser operating at pulse rates of 2 to 10 kilohertz. Measurements from the laser-ranging device are coupled with data acquired from inertial navigation system (INS) attitude sensors and differentially corrected global positioning system (GPS) receivers to measure topography of the surface at accuracies of +/-15 centimeters. The nominal ATM platform is a Twin Otter or P-3 Orion aircraft, but the instrument may be deployed on a range of light aircraft. Elevation measurements were collected over the survey area using the ATM system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. 
ALPS is routinely used to create maps that represent submerged or first surface topography.

  4. The First Four Years of Orthoimages from NEON's Airborne Observation Platform

    NASA Astrophysics Data System (ADS)

    Adler, J.; Gallery, W. O.

    2016-12-01

    The National Ecological Observatory Network (NEON), funded by the National Science Foundation (NSF), has been collecting orthorectified images in conjunction with lidar and spectrometer data for the past four years. The NEON project breaks up the United States into 20 regional areas from Puerto Rico to the North Slope of Alaska, with each region (Domain) typically having three specific sites of interest. Each site spans between 100km2 - 720km2 in area. Currently there are over 125,000 orthorectified images online from 6 Domains for the public and scientific community to freely download. These images are expected to assist researchers in many areas including vegetation cover, dominant vegetation type, and environmental change detection. In 2016 the NEON Airborne Observation Platform (AOP) group has collected digital imagery at 8.5 cm resolution over approximately 30 sites, for a total of 60,000 orthoimages. This presentation details the current status of the surveys conducted to date, and describes the scientific details of how NEON publishes Level 1 and Level 3 products. In particular, the onboard lidar system's contribution to the orthorectification process is outlined, in addition to the routines utilized for correcting white balance and exposure. Additionally, key flight parameters needed to produce NEON's complementary data of multi-sensor (camera/lidar/spectrometer) instruments are discussed. Problems with validating the orthoimages with the coarser resolution lidar system are addressed, including the utilization of ground-truth locations. Lastly, methods to access NEON's publicly available 10cm resolution orthoimages (in both individual image format, and in 1km by 1km tiles) are presented. A brief overview of the 2017 field season's nine new sites completes the presentation.

  5. Rapid mapping of ultrafine fault zone topography with structure from motion

    USGS Publications Warehouse

    Johnson, Kendra; Nissen, Edwin; Saripalli, Srikanth; Arrowsmith, J. Ramón; McGarey, Patrick; Scharer, Katherine M.; Williams, Patrick; Blisniuk, Kimberly

    2014-01-01

    Structure from Motion (SfM) generates high-resolution topography and coregistered texture (color) from an unstructured set of overlapping photographs taken from varying viewpoints, overcoming many of the cost, time, and logistical limitations of Light Detection and Ranging (LiDAR) and other topographic surveying methods. This paper provides the first investigation of SfM as a tool for mapping fault zone topography in areas of sparse or low-lying vegetation. First, we present a simple, affordable SfM workflow, based on an unmanned helium balloon or motorized glider, an inexpensive camera, and semiautomated software. Second, we illustrate the system at two sites on southern California faults covered by existing airborne or terrestrial LiDAR, enabling a comparative assessment of SfM topography resolution and precision. At the first site, an ∼0.1 km2 alluvial fan on the San Andreas fault, a colored point cloud of density mostly >700 points/m2 and a 3 cm digital elevation model (DEM) and orthophoto were produced from 233 photos collected ∼50 m above ground level. When a few global positioning system ground control points are incorporated, closest point vertical distances to the much sparser (∼4 points/m2) airborne LiDAR point cloud are mostly 530 points/m2 and a 2 cm DEM and orthophoto were produced from 450 photos taken from ∼60 m above ground level. Closest point vertical distances to existing terrestrial LiDAR data of comparable density are mostly <6 cm. Each SfM survey took ∼2 h to complete and several hours to generate the scene topography and texture. SfM greatly facilitates the imaging of subtle geomorphic offsets related to past earthquakes as well as rapid response mapping or long-term monitoring of faulted landscapes.
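The closest-point vertical comparison between an SfM cloud and reference LiDAR can be sketched as a brute-force horizontal nearest-neighbor search (illustrative coordinates, not the survey data; a KD-tree would replace the all-pairs distance matrix at real point densities):

```python
import numpy as np

def vertical_distances(cloud, reference):
    """For each (x, y, z) point in `cloud`, find the horizontally nearest
    point in `reference` and return the vertical (z) offset -- a brute-force
    sketch of the closest-point validation of SfM against LiDAR."""
    cloud, reference = np.asarray(cloud, float), np.asarray(reference, float)
    d2 = ((cloud[:, None, :2] - reference[None, :, :2]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)             # index of closest reference point in xy
    return cloud[:, 2] - reference[nearest, 2]

sfm = [[0.0, 0.0, 10.02], [1.0, 0.0, 10.51]]
lidar = [[0.1, 0.0, 10.00], [0.9, 0.1, 10.50]]
print(np.round(vertical_distances(sfm, lidar), 3))  # [0.02 0.01]
```

Summarizing the absolute values of these offsets gives the centimeter-level agreement statistics quoted in the abstract.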

  6. STS-93 Commander Collins uses a digital camera on the middeck of Columbia

    NASA Image and Video Library

    2013-11-18

    STS093-347-015 (23-27 July 1999) --- Astronaut Eileen M. Collins, mission commander, loads a roll of film into a still camera on Columbia's middeck. Collins is the first woman mission commander in the history of human space flight.

  7. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  8. Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.

    PubMed

    Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue

    2015-01-01

    A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that can correct the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and the field experiments proved the validity and reliability of our work.
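The gain from lowering the working F/# follows from two standard paraxial relations; a short sketch comparing the paper's F/1.97 objective with the conventional F/2.45 limit (the relations are textbook optics, not the authors' design equations):

```python
def numerical_aperture(f_number):
    """Paraxial image-space NA in air: NA ~ 1 / (2 * F#)."""
    return 1.0 / (2.0 * f_number)

def relative_irradiance(f_a, f_b):
    """Light-gathering ratio of lens A over lens B: irradiance scales as (1/F#)^2."""
    return (f_b / f_a) ** 2

print(round(numerical_aperture(1.97), 3))         # 0.254
print(round(relative_irradiance(1.97, 2.45), 2))  # 1.55x the light of an F/2.45 lens
```

More light per exposure is what lets the DMD's per-pixel exposure control be traded for the reported increase in usable dynamic range.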

  9. BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications

    NASA Astrophysics Data System (ADS)

    Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard

    2013-06-01

    Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-MTM imaging module, in a VGA format that reads out in 30 and 60Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers and optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.

  10. High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations

    NASA Astrophysics Data System (ADS)

    Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas

    2007-10-01

    A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral systems, two multiband cameras (IIS 4-band and Itek 9-band) and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm x 24 cm format) tested were: Agfa color negative and extended red (visible and near infrared) panchromatic, and Kodak color infrared and B&W (visible and near infrared) infrared. All films were negative processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling set up. Results indicate that multispectral photogrammetric systems offer improved feature mapping capability.
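The density-based "optical count" conversion rests on the basic density-transmittance relation of film; a simplified sketch (one plausible linear-density mapping, not the authors' exact scanner pipeline; the density range d_max is hypothetical):

```python
import math

def transmittance(optical_density):
    """Film transmittance from optical density: T = 10**(-D)."""
    return 10.0 ** (-optical_density)

def linear_density_counts(optical_density, d_max=2.0, bits=8):
    """One plausible 'linear density' conversion: digital value proportional
    to density over an assumed scanner density range (hypothetical d_max)."""
    d = min(max(optical_density, 0.0), d_max)
    return round((d / d_max) * (2 ** bits - 1))

print(round(transmittance(0.3), 3))  # 0.501: a density of 0.3 halves the light
print(linear_density_counts(1.0))    # 128 counts at mid-range density
```

Keeping the digital value linear in density (rather than in transmittance) is what preserves the original film density characteristics that the abstract emphasizes.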

  11. Error rate of automated calculation for wound surface area using a digital photography.

    PubMed

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its reliability has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs each of various sized wounds (diameter: 0.5-3.5 cm) were taken of the facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average value of wound area taken by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to a set of photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
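Relative error against a reference area is the study's key statistic, and averaging repeated photographs is what brings it under 5%. A minimal sketch with hypothetical measurements:

```python
def relative_error(measured_cm2, reference_cm2):
    """Relative error (%) of a calculated wound area against the reference value."""
    return 100.0 * abs(measured_cm2 - reference_cm2) / reference_cm2

# Hypothetical areas (cm^2) from four photographs of the same wound;
# averaging damps the scatter of individual shots.
shots = [3.02, 2.88, 3.10, 2.95]
true_area = 3.00
print(round(relative_error(sum(shots) / len(shots), true_area), 2))  # 0.42
```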

  12. Applications of Action Cam Sensors in the Archaeological Yard

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Ackermann, S.; Fregonese, L.; Fassi, F.; Adami, A.

    2018-05-01

    In recent years, special digital cameras called "action cameras" or "action cams" have become popular due to their low price, small size, light weight, robustness and capacity to record videos and photos even in extreme environmental conditions. Indeed, these particular cameras have been designed mainly to capture sport actions and to work even in case of dirt, bumps, or underwater and at different external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field. Indeed, beyond the sensor resolution, such cameras combined with fixed lenses with low distortion are preferred to perform accurate 3D measurements; on the contrary, action cameras have small wide-angle lenses, with lower performance in terms of sensor resolution, lens quality and distortions. However, considering the ability of action cameras to acquire under conditions that may be difficult for standard DSLR cameras, and their lower price, they could be an interesting option during archaeological excavation activities to document the state of the places. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode, and an evaluation of their application in the field of Cultural Heritage, are investigated and discussed. Using a suitable technique, it has been possible to improve the accuracy of the 3D model obtained from action-cam images. Case studies show the quality and the utility of this type of sensor in the survey of archaeological artefacts.
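The radial distortion that dominates wide-angle action-cam optics is conventionally modeled with the Brown-Conrady polynomial estimated during self-calibration; a minimal sketch of the radial term, with hypothetical coefficients:

```python
def radial_distortion(x, y, k1, k2):
    """Brown-Conrady radial term: maps ideal image coordinates (x, y),
    normalized by the focal length, to their distorted position."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Wide-angle optics: a strong negative k1 pulls edge points toward the center
# (barrel distortion). Coefficients here are hypothetical.
print(radial_distortion(0.5, 0.0, k1=-0.25, k2=0.02))
```

Self-calibration estimates k1 and k2 alongside the camera pose; applying the inverse mapping is what improves the accuracy of the resulting 3D model.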

  13. Remote sensing for hurricane Andrew impact assessment

    NASA Technical Reports Server (NTRS)

    Davis, Bruce A.; Schmidt, Nicholas

    1994-01-01

    Stennis Space Center personnel flew a Learjet equipped with instrumentation designed to acquire imagery in many spectral bands into areas most damaged by Hurricane Andrew. The calibrated airborne multispectral scanner (CAMS), a NASA-developed sensor, and a Zeiss camera acquired images of these areas. The information derived from the imagery was used to assist Florida officials in assessing the devastation caused by the hurricane. The imagery provided the relief teams with an assessment of the debris covering roads and highways so cleanup plans could be prioritized. The imagery also mapped the level of damage in residential and commercial areas of southern Florida and provided maps of beaches and land cover for determination of beach loss and vegetation damage, particularly the mangrove population. Stennis Space Center personnel demonstrated the ability to respond quickly and the value of such response in an emergency situation. The digital imagery from the CAMS can be processed, analyzed, and developed into products for field crews faster than conventional photography. The resulting information is versatile and allows for rapid updating and editing. Stennis Space Center and state officials worked diligently to compile information to complete analyses of the hurricane's impact.

  14. The high resolution stereo camera (HRSC): acquisition of multi-spectral 3D-data and photogrammetric processing

    NASA Astrophysics Data System (ADS)

    Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus

    2017-11-01

    At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years, an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of applications. It combines 3D capability and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system, in combination with the multi-angle image information, yields precise, high-frequency orientation data for the acquired image lines. To handle these data, a completely automated photogrammetric processing system has been developed that generates multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.

  15. Standoff aircraft IR characterization with ABB dual-band hyper spectral imager

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc

    2012-09-01

    Remote sensing infrared characterization of rapidly evolving events generally involves combining a spectro-radiometer and one or more infrared cameras as separate instruments. Time synchronization, spatial co-registration, consistent radiometric calibration, and the management of several systems are important challenges to overcome; they complicate the processing of target infrared characterization data and increase the sources of error affecting the final radiometric accuracy. MR-i is a dual-band hyperspectral imaging spectro-radiometer that combines two 256 x 256 pixel infrared cameras and an infrared spectro-radiometer into a single instrument. This field instrument generates spectral datacubes in the MWIR and LWIR and is designed to acquire the spectral signatures of rapidly evolving events. The design is modular. The spectrometer has two output ports configured with two simultaneously operated cameras, either to widen the spectral coverage or to increase the dynamic range of the measured amplitudes. Various telescope options are available for the input port. Recent platform developments and field trial measurement performance will be presented for a system configuration dedicated to the characterization of airborne targets.

  16. Development of an imaging method for quantifying a large digital PCR droplet

    NASA Astrophysics Data System (ADS)

    Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang

    2017-02-01

    Portable devices have been recognized as the future link between end-users and lab-on-a-chip devices. They offer a user-friendly interface and provide apps that interface with headphones, cameras, communication ports, etc. In particular, the cameras installed in smartphones and tablets already offer high imaging resolution with a large number of pixels. This unique feature has prompted research into integrating optical fixtures with smartphones to provide microscopic imaging capability. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and analyzed in the sRGB (red, green, blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by the CCD camera and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target's concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app, providing a user-friendly interface that requires no professional training.
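
    The final counting stage described above (threshold the image, then size-filter connected bright regions to reject noise specks) can be sketched as follows. The tiny "image" and the size limits are illustrative, not values from the paper.

```python
# Sketch of positive-droplet counting via thresholding plus size filtering:
# find connected regions of above-threshold pixels (4-connectivity flood
# fill) and keep only those whose pixel count is droplet-sized. The 8-bit
# intensity grid and size bounds below are illustrative examples.

def count_positive_droplets(img, threshold, min_px, max_px):
    """Count connected bright regions whose area lies in [min_px, max_px]."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or img[sy][sx] < threshold:
                continue
            # Flood-fill one bright component and measure its area.
            stack, size = [(sy, sx)], 0
            seen[sy][sx] = True
            while stack:
                y, x = stack.pop()
                size += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and img[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if min_px <= size <= max_px:  # size filter rejects noise specks
                count += 1
    return count

# Two 2x2 droplets plus one single-pixel noise speck; min_px=2 drops the speck.
img = [
    [0, 0, 9, 9, 0, 0, 0, 0],
    [0, 0, 9, 9, 0, 0, 8, 8],
    [0, 0, 0, 0, 0, 0, 8, 8],
    [0, 7, 0, 0, 0, 0, 0, 0],
]
print(count_positive_droplets(img, threshold=5, min_px=2, max_px=10))  # → 2
```

The positive count feeds the usual digital-PCR Poisson statistics to estimate target concentration.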

  17. Automatic rice crop height measurement using a field server and digital image processing.

    PubMed

    Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit

    2014-01-07

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
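
    The indirect measurement at the core of this method (crop height inferred from how much of the reference marker bar the plants occlude) can be sketched as follows. The pixel counts and bar length are illustrative assumptions, not values from the paper.

```python
# Sketch of the height-measurement step: the crop height equals the occluded
# portion of the marker bar, converted from pixels to centimeters using the
# fully visible bar as the scale reference. All numbers are illustrative.

def crop_height_cm(initial_bar_px: int, current_bar_px: int,
                   bar_length_cm: float) -> float:
    """Estimate crop height from the visible marker-bar extent.

    initial_bar_px: visible bar height (pixels) before the crop emerged
    current_bar_px: visible bar height (pixels) in today's image
    bar_length_cm:  physical length of the marker bar
    """
    cm_per_px = bar_length_cm / initial_bar_px     # scale from the full bar
    occluded_px = initial_bar_px - current_bar_px  # portion hidden by plants
    return occluded_px * cm_per_px

# Example: a 200 cm bar spans 400 px when fully visible; today only 280 px
# remain visible above the canopy, so the crop stands about 60 cm tall.
print(crop_height_cm(400, 280, 200.0))  # → 60.0
```

Because the bar is in the scene itself, the estimate is insensitive to small day-to-day changes in camera zoom or distance.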

  18. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    USGS Publications Warehouse

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  19. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    PubMed

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  20. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their rapid data acquisition, cost-efficiency, and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached small-frame digital cameras. These monoscopic cameras can capture close-range aerial photographs, but when an accurate nadir-waypoint flight is not possible due to choppy or windy weather, two single aerial images do not always achieve the overlap required for 3D photogrammetry. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos, or the 3D extraction of individual geo-objects. Because of the limited geometric photobase of the stereo camera and the resulting base-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude.
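
    The dependence of DTM accuracy on flight altitude noted above follows from the standard normal-case stereo relation: with a fixed stereo base, the expected height error grows with the square of the flying height. A minimal sketch, with illustrative numbers (base, focal length, matching accuracy) that are not from the paper:

```python
# Sketch of the base-to-height trade-off for a rigid stereo rig: the
# normal-case stereo height error sigma_Z = (H / B) * (H / f) * sigma_px,
# with H the flying height, B the stereo base, f the focal length in
# pixels, and sigma_px the image matching accuracy. Numbers illustrative.

def height_error_m(flying_height_m: float, base_m: float,
                   focal_px: float, match_err_px: float = 0.5) -> float:
    """Expected stereo height error (meters) for the normal case."""
    return (flying_height_m / base_m) * (flying_height_m / focal_px) * match_err_px

# With a fixed 0.2 m base, doubling the UAV altitude roughly quadruples
# the expected DTM height error.
low = height_error_m(flying_height_m=20.0, base_m=0.2, focal_px=2000.0)
high = height_error_m(flying_height_m=40.0, base_m=0.2, focal_px=2000.0)
print(low, high)
```

This quadratic growth is why a short fixed photobase forces low flight altitudes if decimeter-level DTM accuracy is required.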
