Sample records for digital camera imagery

  1. Integration of near-surface remote sensing and eddy covariance measurements: new insights on managed ecosystem structure and functioning

    NASA Astrophysics Data System (ADS)

    Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.

    2009-12-01

    Ground-based visible-light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the use of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high-temporal-resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights into temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: (a) how does year-round grazing affect pepperweed canopy development; (b) is it possible to identify key phenological events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery; (c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems; and (d) what are the scales of temporal correlation between digital camera signals and the carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks, causing earlier green-up, and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both global environmental change and land management perspectives.
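    The "simple greenness index" referred to in question (c) is, in this line of work, commonly the green chromatic coordinate, GCC = G / (R + G + B), averaged over a region of interest. The abstract does not name a specific index, so the following NumPy sketch is illustrative only:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean green chromatic coordinate GCC = G / (R + G + B) over an
    RGB image array of shape (H, W, 3)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    # Guard against division by zero on pure-black pixels.
    gcc = np.divide(rgb[..., 1], total,
                    out=np.zeros_like(total), where=total > 0)
    return gcc.mean()

# A uniformly green patch gives GCC = 1; neutral gray gives 1/3.
green_patch = np.zeros((4, 4, 3))
green_patch[..., 1] = 200.0
print(green_chromatic_coordinate(green_patch))  # 1.0
```

    Tracking this scalar through time yields the canopy "greenness" trajectory that studies like this compare against LAI and flux measurements.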

  2. PhenoCam Dataset v1.0: Vegetation Phenology from Digital Camera Imagery, 2000-2015

    USDA-ARS's Scientific Manuscript database

    This data set provides a time series of vegetation phenological observations for 133 sites across diverse ecosystems of North America and Europe from 2000-2015. The phenology data were derived from conventional visible-wavelength automated digital camera imagery collected through the PhenoCam Networ...

  3. Evaluation of orthomosics and digital surface models derived from aerial imagery for crop mapping

    USDA-ARS's Scientific Manuscript database

    Orthomosics derived from aerial imagery acquired by consumer-grade cameras have been used for crop mapping. However, digital surface models (DSM) derived from aerial imagery have not been evaluated for this application. In this study, a novel method was proposed to extract crop height from DSM and t...

  4. Multi-sensor fusion over the World Trade Center disaster site

    NASA Astrophysics Data System (ADS)

    Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey

    2002-09-01

    The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aid the rescuers in the placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information on the rapidly changing disaster site. Because of the dynamic nature of the site, the data were acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of collection. During processing, the three datasets were combined and georeferenced so that they could be inserted into the client's geographic information systems.

  5. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation assessments, and supports more timely decisions in the buying and selling of residential and commercial property. Oblique imagery is also used for infrastructure monitoring, helping ensure the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45-degree) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS 1DS Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 megapixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique focal lengths of 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights were flown at 600 m and 1,200 m for the 28 mm nadir camera and at 750 m and 1,500 m for the 50 mm nadir camera. Cameras were calibrated using a 3D cage and multiple convergent images with the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration, and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in the vertical can be achieved. Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient to enable Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and to create 3D models using georeferenced oblique imagery.
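    The accuracy figures above are expressed in multiples of GSD, which follows directly from the imaging geometry: GSD = pixel pitch x flying height / focal length. A minimal sketch using the MIDAS figures quoted in the abstract (6.4-micron pixels, 28 mm and 50 mm lenses):

```python
def ground_sample_distance(pixel_size_um, focal_length_mm, altitude_m):
    """Ground sample distance in metres: pixel pitch scaled by the
    ratio of flying height to focal length."""
    return (pixel_size_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)

# 50 mm nadir camera at 750 m and 28 mm nadir camera at 600 m.
print(ground_sample_distance(6.4, 50, 750))  # ~0.096 m (9.6 cm)
print(ground_sample_distance(6.4, 28, 600))  # ~0.137 m
```

    At a 9.6 cm GSD, the quoted 1 to 1.5 GSD planimetric accuracy corresponds to roughly 10-14 cm on the ground.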

  6. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  7. Evaluation of Remotely Sensed Data for the Application of Geospatial Techniques to Assess Hurricane Impacts on Coastal Bird Habitat

    DTIC Science & Technology

    2009-08-01

    habitat analysis because of the high horizontal error between the mosaicked image tiles. The imagery was collected with a non-metric camera and likewise...possible with true color imagery (digital orthophotos) or multispectral imagery, but usually comes at a much higher cost. Due to its availability and

  8. Traceable Calibration, Performance Metrics, and Uncertainty Estimates of Minirhizotron Digital Imagery for Fine-Root Measurements

    PubMed Central

    Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward

    2014-01-01

    Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
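    The trueness and precision terminology above follows the usual metrology convention: trueness is the closeness of the measurement mean to a reference value, and precision is the spread of repeated measurements. A minimal sketch with hypothetical root-diameter readings (the values are illustrative, not from the paper):

```python
import statistics

def trueness_and_precision(measured, reference):
    """Trueness as the mean deviation (bias) from a reference value,
    precision as the sample standard deviation of repeat measurements."""
    bias = statistics.fmean(measured) - reference
    spread = statistics.stdev(measured)
    return bias, spread

# Five repeat measurements (mm) of a 2.0 mm calibration target.
bias, spread = trueness_and_precision([2.1, 1.9, 2.0, 2.2, 1.8], 2.0)
print(round(bias, 3), round(spread, 3))  # ~0.0 0.158
```

    Quantifying both per camera, as the paper recommends, is what allows sensor-related uncertainty to be separated from true biological variability.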

  9. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  10. Method for the visualization of landform by mapping using low altitude UAV application

    NASA Astrophysics Data System (ADS)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly as mapping technologies, and the significance of and need for digital landform mapping have grown over the years. In this study, a mapping workflow is applied to obtain two different data sets: an orthophoto and a DSM. Low Altitude Aerial Photography (LAAP) was captured using a low-altitude UAV (drone) with a fixed camera, while digital photogrammetric processing in PhotoScan was applied for cartographic data collection. Photogrammetric and orthomosaic processing are the main data-processing steps. High image quality is essential for the effectiveness and quality of typical mapping outputs such as 3D models, Digital Elevation Models (DEM), Digital Surface Models (DSM), and orthoimages. The accuracy of Ground Control Points (GCPs), the flight altitude, and the resolution of the camera are essential for a good-quality DEM and orthophoto.

  11. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. The HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop the NASA-developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; and the ESC (a NASA-modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  12. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Image and Video Library

    1992-04-22

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop NASA developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  13. Validation of Suomi-NPP VIIRS sea ice concentration with very high-resolution satellite and airborne camera imagery

    NASA Astrophysics Data System (ADS)

    Baldwin, Daniel; Tschudi, Mark; Pacifici, Fabio; Liu, Yinghui

    2017-08-01

    Two independent VIIRS-based Sea Ice Concentration (SIC) products are validated against SIC as estimated from Very High Spatial Resolution Imagery for several VIIRS overpasses. The 375 m resolution VIIRS SIC from the Interface Data Processing Segment (IDPS) SIC algorithm is compared against estimates made from 2 m DigitalGlobe (DG) WorldView-2 imagery and also against estimates created from 10 cm Digital Mapping System (DMS) camera imagery. The 750 m VIIRS SIC from the Enterprise SIC algorithm is compared against DG imagery. The IDPS vs. DG comparisons reveal that, due to algorithm issues, many of the IDPS SIC retrievals were falsely assigned ice-free values when the pixel was clearly over ice. These false values increased the validation bias and RMS statistics. The IDPS vs. DMS comparisons were largely over ice-covered regions and did not demonstrate the false retrieval issue. The validation results show that products from both the IDPS and Enterprise algorithms were within or very close to the 10% accuracy (bias) specifications in both the non-melting and melting conditions, but only products from the Enterprise algorithm met the 25% specifications for the uncertainty (RMS).
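    The bias and RMS statistics cited against the specifications are pixel-wise comparisons between retrieved and reference concentrations. A sketch with hypothetical values (in percent ice cover), including one false ice-free retrieval of the kind described:

```python
import math

def bias_and_rmse(retrieved, reference):
    """Mean difference (bias) and root-mean-square error between
    retrieved and reference sea ice concentrations."""
    diffs = [r - t for r, t in zip(retrieved, reference)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# One false open-water retrieval (0% where the reference shows 90%)
# dominates both statistics, as observed in the IDPS comparisons.
print(bias_and_rmse([95, 100, 0, 80], [90, 100, 90, 85]))
```

    A single gross outlier of this kind shifts the bias far more than many small retrieval errors, which is why the false ice-free values inflated the IDPS validation statistics.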

  14. Mountain pine beetle detection and monitoring: evaluation of airborne imagery

    NASA Astrophysics Data System (ADS)

    Roberts, A.; Bone, C.; Dragicevic, S.; Ettya, A.; Northrup, J.; Reich, R.

    2007-10-01

    The processing and evaluation of digital airborne imagery for detection, monitoring, and modeling of mountain pine beetle (MPB) infestations are examined. The most efficient and reliable remote sensing strategy for identifying and mapping infestation stages ("current" to "red" to "grey" attack) of MPB in lodgepole pine forests is determined with respect to the most practical and cost-effective procedures. This research was planned specifically to enhance knowledge by determining the remote sensing imaging systems and analytical procedures that optimize resource management for this critical forest health problem. Within the context of this study, airborne remote sensing of forest environments for forest health determinations (MPB) is most suitably undertaken using multispectral digitally converted imagery (aerial photography) at scales of 1:8,000 for early detection of current MPB attack and 1:16,000 for mapping and sequential monitoring of red and grey attack. Digital conversion should be undertaken at 10 to 16 microns for B&W multispectral imagery and 16 to 24 microns for colour and colour-infrared imagery. From an "operational" perspective, the use of twin mapping cameras with colour and B&W or colour-infrared film will provide the best approximation of multispectral digital imagery, with near-comparable performance in a competitive private-sector context (open bidding).

  15. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management, but few have examined the usefulness of high-resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 x 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, the pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).

  16. Improved TDEM formation using fused ladar/digital imagery from a low-cost small UAV

    NASA Astrophysics Data System (ADS)

    Khatiwada, Bikalpa; Budge, Scott E.

    2017-05-01

    Formation of a Textured Digital Elevation Model (TDEM) is useful in many applications in agriculture, disaster response, terrain analysis, and more. Use of a low-cost small UAV system with a texel camera (fused lidar/digital imagery) can significantly reduce cost compared to conventional aircraft-based methods. This paper continues work on this problem reported in a previous paper by Bybee and Budge and reports improvements in performance. A UAV fitted with a texel camera is flown at a fixed height above the terrain, and swaths of texel image data of the terrain below are taken continuously. Each texel swath has one or more lines of lidar data surrounded by a narrow strip of EO data, and swaths are taken such that there is some overlap from each swath to its adjacent swath. The GPS/IMU fitted to the camera also gives coarse knowledge of attitude and position. Using this coarse knowledge and the information in the texel image, the error in the camera position and attitude is reduced, which helps produce an accurate TDEM. This paper improves on the original work by using multiple lines of lidar data per swath. The final results are shown and analyzed for numerical accuracy.

  17. Hierarchical object-based classification of ultra-high-resolution digital mapping camera (DMC) imagery for rangeland mapping and assessment

    USDA-ARS's Scientific Manuscript database

    Ultra high resolution digital aerial photography has great potential to complement or replace ground measurements of vegetation cover for rangeland monitoring and assessment. We investigated object-based image analysis (OBIA) techniques for classifying vegetation in southwestern U.S. arid rangelands...

  18. Applications and Innovations for Use of High Definition and High Resolution Digital Motion Imagery in Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2016-01-01

    The first live High Definition Television (HDTV) from a spacecraft was broadcast in November 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine: HDTV cameras stream live views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operational applicability of HDTV and high-resolution imagery since that first live broadcast. This paper discusses the current state of real-time and file-based HDTV and higher-resolution video for space operations. A potential roadmap is provided for further development and innovation in high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics covered in the paper include: an update on radiation tolerance and performance of various camera types and sensors, and the ramifications for the future applicability of these cameras in space operations; practical experience with downlinking very large imagery files with breaks in link coverage; ramifications of larger camera resolutions such as Ultra-High Definition, 6,000-pixel, and 8,000-pixel formats in space applications; enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, optical communications, Bayer-pattern sensors, and other similar innovations; and likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.

  19. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  20. Analyzing RCD30 Oblique Performance in a Production Environment

    NASA Astrophysics Data System (ADS)

    Soler, M. E.; Kornus, W.; Magariños, A.; Pla, M.

    2016-06-01

    In 2014 the Institut Cartogràfic i Geològic de Catalunya (ICGC) decided to incorporate digital oblique imagery into its portfolio in response to the growing demand for this product, attributable to its useful applications in a wide variety of fields and, most recently, to an increasing interest in 3D modeling. The selection phase for a digital oblique camera led to the purchase of the Leica RCD30 Oblique system, an 80-megapixel multispectral medium-format camera which consists of one nadir camera and four oblique cameras acquiring images at an off-nadir angle of 35°. The system also has an on-board multi-directional motion compensation system to deliver the highest image quality. The emergence of airborne oblique cameras has run in parallel with the inclusion of computer vision algorithms in traditional photogrammetric workflows. Such algorithms rely on having multiple views of the same area of interest and take advantage of image redundancy for automatic feature extraction. This multiview capability is strongly fostered by oblique systems, which capture different points of view simultaneously with each camera shot. Different companies and national mapping agencies (NMAs) have started pilot projects to assess the capabilities of the 3D mesh that can be obtained using correlation techniques. Beyond a software prototyping phase, and taking into account the currently immature state of several components of the oblique imagery workflow, the ICGC has focused on deploying a real production environment, with special interest in matching the performance and quality of the existing production lines based on classical nadir images. This paper introduces different test scenarios and layouts to analyze the impact of different variables on geometric and radiometric performance. Variables such as flight altitude, side and forward overlap, and ground control point measurements and location have been considered in the evaluation of aerial triangulation and stereo plotting. Furthermore, two different flight configurations were designed to measure the quality of the absolute radiometric calibration and the resolving power of the system. To quantify the effective resolving power of RCD30 Oblique images, a tool based on computation of the Line Spread Function has been developed. The tool processes a region of interest containing a single contour in order to extract a numerical measure of edge smoothness for a given flight session. The ICGC is highly devoted to deriving information from satellite and airborne multispectral remote sensing imagery. A seamless Normalized Difference Vegetation Index (NDVI) retrieved from Digital Metric Camera (DMC) reflectance imagery is one of the products in ICGC's portfolio. As an evolution of this well-defined product, this paper presents an evaluation of the absolute radiometric calibration of the RCD30 Oblique sensor. To assess the quality of the measurements, the ICGC has developed a procedure based on simultaneous acquisition of RCD30 Oblique imagery and radiometrically calibrated AISA (Airborne Hyperspectral Imaging System) imagery.
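    The NDVI product mentioned above is the standard band ratio NDVI = (NIR - Red) / (NIR + Red), computed per pixel on reflectance imagery. A minimal NumPy sketch (illustrative, not ICGC's production code):

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red) on reflectance
    arrays; pixels where both bands are zero are set to 0."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    return np.divide(nir - red, denom,
                     out=np.zeros_like(denom), where=denom > 0)

# Dense vegetation reflects strongly in NIR relative to red.
print(ndvi(np.array([0.5, 0.3, 0.1]), np.array([0.1, 0.1, 0.1])))
```

    Computing this on radiometrically calibrated reflectance (rather than raw digital numbers) is what makes the resulting index seamless across flight sessions.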

  21. Solar-Powered Airplane with Cameras and WLAN

    NASA Technical Reports Server (NTRS)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  22. Removal of instrument signature from Mariner 9 television images of Mars

    NASA Technical Reports Server (NTRS)

    Green, W. B.; Jepsen, P. L.; Kreznar, J. E.; Ruiz, R. M.; Schwartz, A. A.; Seidman, J. B.

    1975-01-01

    The Mariner 9 spacecraft was inserted into orbit around Mars in November 1971. The two vidicon camera systems returned over 7300 digital images during orbital operations. The high volume of returned data and the scientific objectives of the Television Experiment made development of automated digital techniques for the removal of camera system-induced distortions from each returned image necessary. This paper describes the algorithms used to remove geometric and photometric distortions from the returned imagery. Enhancement processing of the final photographic products is also described.

  23. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
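    The draping step described above reduces, for each output pixel, to sampling the geo-registered DEM at fractional grid coordinates, most simply by bilinear interpolation. A minimal sketch (illustrative, not PaySim's actual implementation):

```python
def bilinear_sample(dem, x, y):
    """Bilinearly interpolate a height from a row-major DEM grid
    (list of lists) at fractional grid coordinates (x, y).
    Interior points only: x and y must leave room for a 2x2 window."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    h00, h10 = dem[y0][x0], dem[y0][x0 + 1]
    h01, h11 = dem[y0 + 1][x0], dem[y0 + 1][x0 + 1]
    top = h00 * (1 - dx) + h10 * dx
    bottom = h01 * (1 - dx) + h11 * dx
    return top * (1 - dy) + bottom * dy

dem = [[10.0, 20.0],
       [30.0, 40.0]]
print(bilinear_sample(dem, 0.5, 0.5))  # 25.0
```

    Performing this lookup for every texel of the draped orthoimagery is the per-pixel workhorse of 3-D reflectance scene construction, which is why computational efficiency receives so much emphasis.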

  24. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.

  5. High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations

    NASA Astrophysics Data System (ADS)

    Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas

    2007-10-01

    A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral systems, two multiband cameras (IIS 4-band and Itek 9-band), and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm x 24 cm format) tested were: Agfa color negative and extended-red (visible and near-infrared) panchromatic; and Kodak color infrared and B&W (visible and near-infrared) infrared. All films were negative processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques, as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling setup. Results indicate that multispectral photogrammetric systems offer improved feature mapping capability.

  6. Geocam Space: Enhancing Handheld Digital Camera Imagery from the International Space Station for Research and Applications

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Lee, Yeon Jin; Dille, Michael

    2016-01-01

    Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir-viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/ground pixel resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov), and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications, and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully georeferenced data products: while current digital cameras typically have integrated GPS, this does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board the ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc.
The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time- and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, and hindering the timely use of data for disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements, including cameras, central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers), reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side- and downward-facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z position affixed to the interior of the ISS to determine the camera pose corresponding to each image frame. This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - is transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for each individual image frame. 
This new capability represents a significant advance in geolocation from the manual feature-matching approach for both nadir and off-nadir viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data is added to the Gateway to Astronaut Photography of Earth online database in both Geotiff and Keyhole Markup Language (kml) formats. The integration of the GeoRef software component of Geocam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in final prototyping phase, and is on-schedule for launch to the ISS in late 2016. Installation and routine use of the Geocam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.

  7. Integration of Geodata in Documenting Castle Ruins

    NASA Astrophysics Data System (ADS)

    Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.

    2016-06-01

    Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are the digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows the modelling and visualization of 3D models of historical structures. An additional aspect of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and presents an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired from a Leica ScanStation2 and digital imagery taken using a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.

  8. Evaluating planetary digital terrain models-The HRSC DTM test

    USGS Publications Warehouse

    Heipke, C.; Oberst, J.; Albertz, J.; Attwenger, M.; Dorninger, P.; Dorrer, E.; Ewe, M.; Gehrke, S.; Gwinner, K.; Hirschmuller, H.; Kim, J.R.; Kirk, R.L.; Mayer, H.; Muller, Jan-Peter; Rengarajan, R.; Rentsch, M.; Schmidt, R.; Scholten, F.; Shan, J.; Spiegel, M.; Wahlisch, M.; Neukum, G.

    2007-01-01

    The High Resolution Stereo Camera (HRSC) has been orbiting the planet Mars since January 2004 onboard the European Space Agency (ESA) Mars Express mission and delivers imagery which is being used for topographic mapping of the planet. The HRSC team has conducted a systematic inter-comparison of different alternatives for the production of high-resolution digital terrain models (DTMs) from the multi-look HRSC push-broom imagery. Based on carefully chosen test sites, the test participants produced DTMs which were subsequently analysed in a quantitative and a qualitative manner. This paper reports on the results obtained in this test.

  9. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered. A touch-screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes, including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
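    The SNR advantage of Hadamard multiplexing mentioned above comes from measuring coded sums of channels rather than one channel at a time, so each detector reading collects more light. A minimal sketch of the idea, assuming a 3-channel S-matrix code; the instrument's actual masks and decoding are not described in this record.

```python
# Each measurement passes a coded subset of spectral channels (an S-matrix).
S = [[1, 1, 0],   # measurement 1 sums channels 0 and 1
     [1, 0, 1],   # measurement 2 sums channels 0 and 2
     [0, 1, 1]]   # measurement 3 sums channels 1 and 2

def encode(channels):
    """Simulate the multiplexed detector readings m = S @ x."""
    return [sum(s * x for s, x in zip(row, channels)) for row in S]

def decode(m):
    """Invert the 3x3 S-matrix code in closed form to recover the channels."""
    m0, m1, m2 = m
    return [(m0 + m1 - m2) / 2,
            (m0 - m1 + m2) / 2,
            (-m0 + m1 + m2) / 2]

true_channels = [10.0, 4.0, 7.0]
readings = encode(true_channels)   # -> [14.0, 17.0, 11.0]
recovered = decode(readings)       # -> [10.0, 4.0, 7.0]
```

When detector noise is independent of signal level, averaging over the multiplexed sums reduces the noise on each recovered channel relative to scanning one channel at a time.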

  10. A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies

    NASA Astrophysics Data System (ADS)

    Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.

    2015-12-01

    Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming and hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and functions and have been field tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery/data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats, including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems. 
Imagery can be viewed and analyzed in RGB, HSV, and L*a*b* color space. Users can select a spectral index derived from the published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.

  11. Get the Picture?

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.

  12. Increasing Visual Literacy Skills with Digital Imagery: Successful Models for Using a Set of Digital Cameras in a College of Education

    ERIC Educational Resources Information Center

    Wilhelm, Lance

    2005-01-01

    The use of images is becoming more pervasive in modern culture, and schools must adapt their curricula and instructional practices accordingly. Visual literacy is becoming more important from a curricular standpoint as society relies to a greater degree on images and visual communication strategies. Thus, in order for students to be marketable in…

  13. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest.

    PubMed

    Yang, Hualei; Yang, Xi; Heskel, Mary; Sun, Shucun; Tang, Jianwu

    2017-04-28

    Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). We found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
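    The camera-NDVI quantity used above can be sketched directly from its definition, assuming co-registered per-pixel NIR and red digital numbers; variable names here are illustrative, not the authors' code.

```python
def camera_ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), per pixel.
    Returns None where the denominator is zero (no signal)."""
    out = []
    for n, r in zip(nir, red):
        total = n + r
        out.append((n - r) / total if total != 0 else None)
    return out

# A leafed-out canopy reflects strongly in the NIR, so NDVI approaches 1;
# bare branches or soil give values near 0.
nir_band = [180, 200, 40]
red_band = [20, 25, 38]
ndvi = camera_ndvi(nir_band, red_band)   # high, high, near-zero
```

Tracking the mean of this per-pixel quantity over a canopy region of interest, day by day, yields the seasonal camera-NDVI trajectory the study compares against leaf expansion, chlorophyll, and LAI.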

  14. AN ASSESSMENT OF GROUND TRUTH VARIABILITY USING A "VIRTUAL FIELD REFERENCE DATABASE"

    EPA Science Inventory



    A "Virtual Field Reference Database (VFRDB)" was developed from field measurment data that included location and time, physical attributes, flora inventory, and digital imagery (camera) documentation foy 1,01I sites in the Neuse River basin, North Carolina. The sampling f...

  15. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to where the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
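    A minimal sketch of the core mapping the abstract describes: projecting a ground (DEM) point into the image through a pinhole camera model, given the pose from the INS. The simple intrinsic model and all numeric values are illustrative assumptions, not the NASA implementation.

```python
def project(ground_point, camera_center, R, focal_px, cx, cy):
    """Map a world point (x, y, z) to pixel coordinates (u, v).
    R is a 3x3 world-to-camera rotation; focal_px, cx, cy form a simple
    intrinsic model. Returns None if the point is behind the camera."""
    # Translate into the camera frame, then rotate.
    d = [g - c for g, c in zip(ground_point, camera_center)]
    xc = [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
    if xc[2] <= 0:
        return None                      # behind the image plane
    u = focal_px * xc[0] / xc[2] + cx    # perspective divide + principal point
    v = focal_px * xc[1] / xc[2] + cy
    return (u, v)

# Toy configuration: the camera's +z axis points toward the ground plane
# 1000 m away, so an identity rotation suffices.
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
pix = project((10.0, 20.0, 0.0), (0.0, 0.0, -1000.0),
              R_identity, focal_px=5000.0, cx=2000.0, cy=1500.0)
# Iterating this projection over every DEM cell in the region of interest
# "paints" the terrain with the image, i.e., geo-rectifies it.
```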

  16. Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration

    NASA Technical Reports Server (NTRS)

    Wolfe, E. W.; Alderman, J. D.

    1971-01-01

    The utility of the Marshall Space Flight Center (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system with its present hard-wired recorder operates erratically, the imagery showed that the camera could be developed as a prime imaging tool for automated missions. Its utility would be enhanced by the development of computer techniques that utilize digital camera output for construction of topographic maps, and it needs increased resolution for examining near-field details. A supplementary imaging system may be necessary for hand-specimen examination at low magnification.

  17. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, obtaining this goal requires several different stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion

  18. APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES

    EPA Science Inventory

    An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use
    (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...

  19. Chosen Aspects of the Production of the Basic Map Using Uav Imagery

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.

    2016-06-01

    For several years there has been an increasing interest in the use of unmanned aerial vehicles for acquiring image data from a low altitude. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale, accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products which are used for registration, economic, and strategic planning. On the basis of these maps other cartographic products are derived, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery to update the basic map. In the research a compact, non-metric camera, mounted on a fixed-wing airframe powered by an electric motor, was used. The tested area covered flat agricultural and woodland terrains. The processing and analysis of orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras, and the low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer compared to conventional digital aerial photos (large values of phi and kappa angles). Therefore, typically, low-altitude images require large along- and across-track overlap - usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. As a result, it was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.

  20. Emergency Response Imagery Related to Hurricanes Harvey, Irma, and Maria

    NASA Astrophysics Data System (ADS)

    Worthem, A. V.; Madore, B.; Imahori, G.; Woolard, J.; Sellars, J.; Halbach, A.; Helmricks, D.; Quarrick, J.

    2017-12-01

    NOAA's National Geodetic Survey (NGS) and Remote Sensing Division acquired and rapidly disseminated emergency response imagery related to the three recent hurricanes Harvey, Irma, and Maria. Aerial imagery was collected using a Trimble Digital Sensor System, a high-resolution digital camera, by means of NOAA's King Air 350ER and DeHavilland Twin Otter (DHC-6) aircraft. The emergency response images are used to assess the before and after effects of the hurricanes' damage. The imagery aids emergency responders, such as FEMA, the Coast Guard, and other state and local governments, in developing recovery strategies and efforts by prioritizing the areas most affected and distributing appropriate resources. Collected imagery is also used to provide damage assessment for use in long-term recovery and rebuilding efforts. Additionally, the imagery allows evacuated persons to see images of their homes and neighborhoods remotely. Each of the individual images is processed through ortho-rectification, and all are merged into a uniform mosaic image. These remotely sensed datasets are publicly available, and often used by web-based map servers as well as federal, state, and local government agencies. This poster will show the imagery collected for these three hurricanes and the processes involved in getting data quickly into the hands of those that need it most.

  1. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
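    The image-differencing step used above for change detection can be sketched as a cell-wise DSM difference with a detection threshold. Given the reported ~25 cm vertical accuracy, a threshold well above that level is sensible; the 0.5 m default here is an illustrative assumption.

```python
def dsm_difference(dsm_before, dsm_after):
    """Cell-wise elevation difference (after - before) of two equal-size grids."""
    return [[a - b for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(dsm_before, dsm_after)]

def changed_cells(diff, threshold=0.5):
    """Return (row, col) indices where |elevation change| exceeds the threshold."""
    return [(i, j)
            for i, row in enumerate(diff)
            for j, d in enumerate(row)
            if abs(d) > threshold]

# Two toy 2x2 DSM tiles (elevations in meters); one cell raised ~2.5 m.
before = [[100.0, 100.1], [100.0, 100.2]]
after  = [[100.1, 100.0], [102.5, 100.1]]
diff = dsm_difference(before, after)
hits = changed_cells(diff)   # -> [(1, 0)]
```

Setting the threshold relative to the DSM's vertical accuracy (and the ~5 cm repeat-flight consistency reported above) keeps noise-level differences from being flagged as real change.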

  2. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest

    DOE PAGES

    Yang, Hualei; Yang, Xi; Heskel, Mary; ...

    2017-04-28

    Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). Here we found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.

  3. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Hualei; Yang, Xi; Heskel, Mary

    Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). Here we found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.

  4. Lunar Reconnaissance Orbiter Camera

    Science.gov Websites

    By combining LROC imagery with historical Apollo data, detailed, interactive maps of the Apollo landing sites have been created, including photographs taken by the original Apollo crews. ASU maintains the Apollo Digital Image Archive.

  5. BOREAS Level-0 C-130 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower-resolution camera was deployed. When both cameras were in operation, the higher-resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large-format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.

  6. NASA Imaging for Safety, Science, and History

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; Bowerman, Deborah S. (Technical Monitor)

    2002-01-01

    Since its creation in 1958, NASA has been making and documenting history, both on Earth and in space. To complete its missions, NASA has long relied on still and motion imagery to document spacecraft performance, see what cannot be seen by the naked eye, and enhance the safety of astronauts and expensive equipment. Today, NASA is working to take advantage of new digital imagery technologies and techniques to make its missions safer and more efficient. An HDTV camera was on board the International Space Station from early August to mid-December 2001. HDTV cameras previously flown have had degradation in the CCD during the short duration of a Space Shuttle flight. An initial performance assessment of the CCD during the first-ever long-duration space flight of an HDTV camera, and during earlier flights, is discussed. Recent Space Shuttle launches have been documented with HDTV cameras and new long lenses, giving clarity never before seen with video. Examples and comparisons will be illustrated between HD, high-speed film, and analog video of these launches and other NASA tests. Other uses of HDTV where image quality is of crucial importance will also be featured.

  7. Monitoring the spatial and temporal evolution of slope instability with Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Glueer, Franziska; Loew, Simon

    2017-04-01

    The identification and monitoring of ground deformation is important for an appropriate analysis and interpretation of unstable slopes. Displacements are usually monitored with in-situ techniques (e.g., extensometers, inclinometers, geodetic leveling, tachymeters and D-GPS) and/or active remote sensing methods (e.g., LiDAR and radar interferometry). In particular situations, however, the choice of an appropriate monitoring system is constrained by site-specific conditions. Slope areas can be very remote and/or affected by rapid surface changes, and thus hardly accessible and often unsafe for field installations. In many cases the use of remote sensing approaches may also be hindered by unsuitable acquisition geometries, poor spatial resolution and revisit times, and/or high costs. The increasing availability of digital imagery acquired from terrestrial photo and video cameras nowadays offers an additional source of data. Such imagery can be exploited to visually identify changes of the scene occurring over time, but also to quantify the evolution of surface displacements. Image processing analyses, such as Digital Image Correlation (also known as pixel-offset or feature-tracking), have been demonstrated to provide a suitable alternative for detecting and monitoring surface deformation at high spatial and temporal resolutions. However, a number of intrinsic limitations have to be considered when dealing with optical imagery acquisition and processing, including the effects of light conditions, shadowing, and/or meteorological variables. Here we propose an algorithm to automatically select and process images acquired from time-lapse cameras. We aim at maximizing the results obtainable from large datasets of digital images acquired under different light and meteorological conditions, and at retrieving accurate information on the evolution of surface deformation.
We show a successful example of application of our approach in the Swiss Alps, more specifically in the Great Aletsch area, where slope instability was recently reactivated due to progressive glacier retreat. At this location, time-lapse cameras have been installed over the last two years, ranging from low-cost, low-resolution webcams to more expensive high-resolution reflex cameras. Our results confirm that time-lapse cameras provide quantitative and accurate measurements of the evolution of surface deformation over space and time, especially in situations where other monitoring instruments fail.
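The pixel-offset principle behind Digital Image Correlation can be illustrated with a minimal sketch: a template patch from a reference frame is located in a later frame by maximizing zero-normalized cross-correlation (ZNCC) over a small integer search window. This is a generic NumPy illustration under stated assumptions, not the authors' algorithm; the function name and search parameters are hypothetical.

```python
import numpy as np

def ncc_offset(ref, cur, center, half=8, search=5):
    """Estimate the integer (row, col) offset of a patch between two frames
    by maximizing zero-normalized cross-correlation (ZNCC)."""
    r, c = center
    tpl = ref[r-half:r+half+1, c-half:c+half+1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)
    best, best_dv = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = cur[r+dr-half:r+dr+half+1, c+dc-half:c+dc+half+1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = (tpl * win).mean()  # ZNCC, ~1.0 for a perfect match
            if score > best:
                best, best_dv = score, (dr, dc)
    return best_dv

# Synthetic check: shift a random texture down 2 rows and left 3 columns.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)
print(ncc_offset(ref, cur, (32, 32)))  # → (2, -3)
```

A real DIC pipeline would add sub-pixel refinement and the image-selection logic the abstract describes; this sketch only shows the correlation core.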

  8. Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2018-04-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close co-operation of the defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate TA ranges: the analytical TRM4 model and the image-based Triangle Orientation Discrimination (TOD) model. In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. The resulting images may be presented to observers or further processed for automatic image quality calculations. For convenience, OSIS incorporates camera descriptions and intermediate results provided by TRM4. As input, OSIS uses pristine imagery tagged with meta-information about the scene content, its physical dimensions, and gray-level interpretation. These images represent planar targets placed at specified distances from the imager. Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques in ECOMOS, such as compression, local contrast enhancement, and digital turbulence mitigation, to name but a few. By means of this image-based approach, image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.

  9. Laser Digital Cinema

    NASA Astrophysics Data System (ADS)

    Takeuchi, Eric B.; Flint, Graham W.; Bergstedt, Robert; Solone, Paul J.; Lee, Dicky; Moulton, Peter F.

    2001-03-01

    Electronic cinema projectors are being developed that use a digital micromirror device (DMD™) to produce the image. Photera Technologies has developed a new architecture that produces truly digital imagery using discrete pulse trains of red, green, and blue light in combination with a DMD™, wherein the number of pulses delivered to the screen during a given frame can be defined in a purely digital fashion. To achieve this, a pulsed RGB laser technology pioneered by Q-Peak is combined with a novel projection architecture that we refer to as Laser Digital Cinema™. This architecture provides imagery wherein, during the time interval of each frame, individual pixels on the screen receive between zero and 255 discrete pulses of each color, a circumstance which yields 24-bit color. Greater color depth or an increased frame rate is achievable by increasing the pulse rate of the laser. Additionally, in the context of multi-screen theaters, a similar architecture permits our synchronously pulsed RGB source to simultaneously power three screens in a color-sequential manner, thereby providing an efficient use of photons, together with the simplifications that derive from using a single DMD™ chip in each projector.

  10. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data

    PubMed Central

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using low density LiDAR, especially in high canopy cover forest. We used high resolution aerial imagery with a low density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction when compared to using the low density LiDAR data alone. PMID:22573971
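The local maximum filter step for treetop detection on a Canopy Height Model can be sketched as follows. This is a generic NumPy illustration, not the study's implementation; the function name, window radius, and height threshold are assumptions for the example.

```python
import numpy as np

def local_maxima(chm, radius=2, min_height=2.0):
    """Return (row, col) treetop candidates: CHM cells that are the strict
    maximum of their (2*radius+1)^2 neighborhood and taller than min_height."""
    padded = np.pad(chm, radius, mode="constant", constant_values=-np.inf)
    tops = []
    for r in range(chm.shape[0]):
        for c in range(chm.shape[1]):
            win = padded[r:r + 2*radius + 1, c:c + 2*radius + 1]
            if chm[r, c] >= min_height and chm[r, c] == win.max():
                # require a strict (unique) maximum to reject flat plateaus
                if (win == chm[r, c]).sum() == 1:
                    tops.append((r, c))
    return tops

# Toy CHM (metres) with two synthetic crowns
chm = np.zeros((20, 20))
chm[5, 5] = 12.0; chm[5, 6] = 10.0   # crown A, apex at (5, 5)
chm[14, 12] = 8.0                    # crown B
print(local_maxima(chm))  # → [(5, 5), (14, 12)]
```

In the study's pipeline, these maxima would seed the watershed segmentation that delineates crown widths.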

  11. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  12. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination, and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  13. REMOTE SENSING OF BIOMASS, LEAF-AREA-INDEX AND CHLOROPHYLL A AND B CONTENT IN THE ACE BASIN AND NATIONAL ESTUARINE RESEARCH RESERVE USING SUB-METER DIGITAL CAMERA IMAGERY. (R828677C003)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  14. Using high-resolution digital aerial imagery to map land cover

    USGS Publications Warehouse

    Dieck, J.J.; Robinson, Larry

    2014-01-01

    The Upper Midwest Environmental Sciences Center (UMESC) has used aerial photography to map land cover/land use on federally owned and managed lands for over 20 years. Until recently, that process used 23- by 23-centimeter (9- by 9-inch) analog aerial photos to classify vegetation along the Upper Mississippi River System, on National Wildlife Refuges, and in National Parks. With digital aerial cameras becoming more common and offering distinct advantages over analog film, UMESC transitioned to an entirely digital mapping process in 2009. Though not without challenges, this method has proven to be much more accurate and efficient when compared to the analog process.

  15. Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)

    NASA Astrophysics Data System (ADS)

    Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark

    2013-05-01

    The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3 kg MSIC is a self-contained, compact, variable-configuration, low-cost, real-time precision metadata annotator with embedded INS/GPS, designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.

  16. The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor

    NASA Astrophysics Data System (ADS)

    Benton, J. R.; Corbett, F.; Tuft, R.

    1980-01-01

    An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.

  17. Ortho-Rectification of Narrow Band Multi-Spectral Imagery Assisted by Dslr RGB Imagery Acquired by a Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.

    2015-08-01

    The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire imagery of high spectral, spatial and temporal resolution for various remote sensing applications. However, because each band is only 10 nm wide, the images have low resolution and a low signal-to-noise ratio, which makes them unsuitable for image matching and digital surface model (DSM) generation. Moreover, since the spectral correlation among the 12 bands of MiniMCA images is low, it is difficult to perform tie-point matching and aerial triangulation across all bands at the same time. In this study, we thus propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher spatial resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS used, imagery from these two sensors can be collected at the same time or separately. In this study, we adopt a fixed-wing UAS carrying a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multi-spectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose as master band the MiniMCA-12 band whose spectral range overlaps that of the DSLR camera. However, because the 12 MiniMCA lenses have different perspective centers and viewing angles, the original 12 channels exhibit a significant band misregistration effect. Thus, the first issue encountered is to reduce this misregistration. Since all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we thus propose a modified projective transformation (MPT) method, together with two systematic error correction procedures, to register all 12 bands of imagery in the same image space. 
As a result, the 12 band images acquired at the same exposure time share the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). In the aerial triangulation stage, the master band of the MiniMCA-12 is therefore treated as a reference channel to link with the DSLR RGB images: all reference images from the master band and all RGB images are triangulated at the same time in the same coordinate system of ground control points (GCPs). Because the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images alone, even when they cannot be recognized on the MiniMCA images. Furthermore, a one-meter gridded digital surface model (DSM) is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme achieves an average misregistration residual length of 0.33 pixels, and that the co-registration errors among the 12 MiniMCA ortho-images, and between the MiniMCA and Canon RGB ortho-images, are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable and accurate for future remote sensing applications.
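The core of band-to-band registration is resampling each slave band into the master band's image space with a projective transform. Below is a minimal nearest-neighbor sketch in NumPy; the actual MPT method adds systematic error corrections not shown here, and `warp_projective` with its output-to-source convention is an assumption for illustration.

```python
import numpy as np

def warp_projective(band, H, out_shape):
    """Resample `band` into a master band's image space. H is a 3x3
    projective transform mapping output pixel (x, y) -> source pixel;
    nearest-neighbor resampling, out-of-frame pixels set to 0."""
    rows, cols = out_shape
    out = np.zeros(out_shape, dtype=band.dtype)
    yy, xx = np.mgrid[0:rows, 0:cols]
    pts = np.stack([xx.ravel(), yy.ravel(), np.ones(rows * cols)])  # homogeneous
    src = H @ pts
    sx = np.rint(src[0] / src[2]).astype(int)   # divide out the projective scale
    sy = np.rint(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < band.shape[1]) & (sy >= 0) & (sy < band.shape[0])
    out.ravel()[ok] = band[sy[ok], sx[ok]]
    return out

# Example: H reduces to a pure pixel translation, output (x, y) -> source (x+1, y)
band = np.arange(25, dtype=float).reshape(5, 5)
H = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
out = warp_projective(band, H, (5, 5))
print(out[0, 0], out[0, 4])  # → 1.0 0.0  (rightmost column falls outside the source)
```

In practice H would be estimated per band from matched tie points; a general H (nonzero bottom row) handles the perspective differences between lenses.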

  18. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing, the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
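The weighted-sum fusion stage can be sketched as follows. This is a generic NumPy illustration of fusing co-registered channels, not the DSP implementation flown on the EVS; the function name and the 3:1 weighting are hypothetical.

```python
import numpy as np

def fuse(images, weights):
    """Weighted-sum fusion of co-registered sensor images.
    `images`: arrays of identical shape with values in [0, 1];
    weights are normalized so the fused image stays in range."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.zeros_like(images[0], dtype=float)
    for img, wi in zip(images, w):
        fused += wi * img
    return fused

# Fuse two registered infrared channels, weighting the first 3:1.
a = np.full((4, 4), 0.8)
b = np.full((4, 4), 0.4)
print(round(fuse([a, b], [3, 1])[0, 0], 6))  # → 0.7
```

After enhancement and affine registration, per-pixel sums like this are cheap enough for frame-rate execution on a single DSP, which is presumably why the simple linear scheme was chosen.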

  19. Characterization of Vegetation using the UC Davis Remote Sensing Testbed

    NASA Astrophysics Data System (ADS)

    Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.

    2006-12-01

    Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach enables rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. Cameras were calibrated using selective light sources, an integrating sphere and a spectrometer, allowing for the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as image applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
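With calibrated red and NIR channels such as those above, the NDVI is computed per pixel as (NIR - red) / (NIR + red). A minimal sketch follows; the reflectance values are hypothetical, chosen only to illustrate the contrast between vegetation and bare soil.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index from calibrated red and
    near-infrared reflectance arrays; eps guards against divide-by-zero."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense canopy reflects strongly in NIR and absorbs red; soil does not.
red = np.array([0.05, 0.30])   # [vegetation, bare soil] (illustrative values)
nir = np.array([0.45, 0.35])
v = ndvi(red, nir)             # vegetation ≈ 0.8, bare soil ≈ 0.08
```

NDVI ranges from -1 to 1, with dense green vegetation typically above 0.5, which is why it serves as a simple canopy greenness tracker in camera-based studies like this one.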

  20. Satellite Imagery Assisted Road-Based Visual Navigation System

    NASA Astrophysics Data System (ADS)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery, and another provider can be used.

  1. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in the precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, is equipped with a 25 mm lens, and is mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the 3D model generated from thermal images has accuracy comparable to the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions, and 1.6 metres in the Z direction.
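The per-axis RMSE check against surveyed GCPs amounts to the following computation. This is a generic sketch with hypothetical coordinates, not the authors' evaluation code.

```python
import numpy as np

def rmse_per_axis(dsm_pts, gcp_pts):
    """Root Mean Square Error between DSM-derived points and surveyed GCPs,
    reported separately for X, Y, Z (columns of the N x 3 arrays)."""
    diff = np.asarray(dsm_pts, dtype=float) - np.asarray(gcp_pts, dtype=float)
    return np.sqrt((diff ** 2).mean(axis=0))

# Two illustrative check points (coordinates in metres, made up for the example)
dsm_xyz = np.array([[100.2, 200.1, 50.4],
                    [101.0, 201.0, 49.0]])
gcp_xyz = np.array([[100.0, 200.0, 50.0],
                    [101.0, 201.0, 50.0]])
rmse = rmse_per_axis(dsm_xyz, gcp_xyz)  # array of [RMSE_X, RMSE_Y, RMSE_Z]
```

Reporting X, Y and Z separately, as the abstract does, exposes the common pattern that height error dominates in photogrammetric DSMs.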

  2. Mapping informal small-scale mining features in a data-sparse tropical environment with a small UAS

    USGS Publications Warehouse

    Chirico, Peter G.; Dewitt, Jessica D.

    2017-01-01

    This study evaluates the use of a small unmanned aerial system (UAS) to collect imagery over artisanal mining sites in West Africa. The purpose of this study is to consider how very high-resolution imagery and digital surface models (DSMs) derived from structure-from-motion (SfM) photogrammetric techniques from a small UAS can fill the gap in geospatial data collection between satellite imagery and data gathered during field work to map and monitor informal mining sites in tropical environments. The study compares both wide-angle and narrow field of view camera systems in the collection and analysis of high-resolution orthoimages and DSMs of artisanal mining pits. The results of the study indicate that UAS imagery and SfM photogrammetric techniques permit DSMs to be produced with a high degree of precision and relative accuracy, but highlight the challenges of mapping small artisanal mining pits in remote and data sparse terrain.

  3. Remote sensing and implications for variable-rate application using agricultural aircraft

    NASA Astrophysics Data System (ADS)

    Thomson, Steven J.; Smith, Lowrey A.; Ray, Jeffrey D.; Zimba, Paul V.

    2004-01-01

    Aircraft routinely used for agricultural spray application are finding utility for remote sensing. Data obtained from remote sensing can be used for prescription application of pesticides, fertilizers, cotton growth regulators, and water (the latter with the assistance of hyperspectral indices and thermal imaging). Digital video was used to detect weeds in early cotton, and preliminary data were obtained to see if nitrogen status could be detected in early soybeans. Weeds were differentiable from early cotton at very low altitude (65 m), with the aid of supervised classification algorithms in the ENVI image analysis software. The camera was flown at very low altitude for acceptable pixel resolution. Nitrogen status was not detectable by statistical analysis of digital numbers (DNs) obtained from images, but soybean cultivar differences were statistically discernible (F=26, p=0.01). Spectroradiometer data are being analyzed to identify narrow spectral bands that might aid in selecting camera filters for determination of plant nitrogen status. Multiple camera configurations are proposed to allow vegetative indices to be developed more readily. Both remotely sensed field images and ground data are to be used for decision-making in a proposed variable-rate application system for agricultural aircraft. For this system, prescriptions generated from digital imagery and data will be coupled with GPS-based swath guidance and programmable flow control.

  4. Calibration of Low Cost Digital Camera Using Data from Simultaneous LIDAR and Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Debiasi, P.; Hainosz, F.; Centeno, J.

    2012-07-01

    Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters for photogrammetric applications. Camera calibration is the procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions as the photogrammetric survey. This paper presents the methodology and experimental results of in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field for use as check points or control points. The photogrammetric images and lidar dataset of the test field were acquired simultaneously. Four flight strips were used to obtain a cross layout, flown in opposite directions (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by interpolation from the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency within the group of interior and exterior orientation parameters. 
This linear dependency occurs in the calibration procedure when vertical images and a flat test field are used. The mathematical correlations between the interior and exterior orientation parameters are analyzed and discussed, as are the accuracies of the calibration experiments.

  5. 1985 ACSM-ASPRS Fall Convention, Indianapolis, IN, September 8-13, 1985, Technical Papers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    Papers are presented on Landsat image data quality analysis, primary data acquisition, cartography, geodesy, land surveying, and the applications of satellite remote sensing data. Topics discussed include optical scanning and interactive color graphics; the determination of astrolatitudes and astrolongitudes using x, y, z-coordinates on the celestial sphere; raster-based contour plotting from digital elevation models using minicomputers or microcomputers; the operational techniques of the GPS when utilized as a survey instrument; public land surveying and high technology; the use of multitemporal Landsat MSS data for studying forest cover types; interpretation of satellite and aircraft L-band synthetic aperture radar imagery; geological analysis of Landsat MSS data; and an interactive real time digital image processing system. Consideration is given to a large format reconnaissance camera; creating an optimized color balance for TM and MSS imagery; band combination selection for visual interpretation of thematic mapper data for resource management; the effect of spatial filtering on scene noise and boundary detail in thematic mapper imagery; the evaluation of the geometric quality of thematic mapper photographic data; and the analysis and correction of Landsat 4 and 5 thematic mapper sensor data.

  6. Volcanic Structure of the Gakkel Ridge at 85°E

    NASA Astrophysics Data System (ADS)

    Willis, C.; Humphris, S.; Soule, S. A.; Reves-Sohn, R.; Shank, T.; Singh, H.

    2007-12-01

    We present an initial volcanologic interpretation of a magmatically robust segment of the ultra-slow spreading (3-7 mm/yr) Gakkel Ridge at 85°E in the eastern Arctic Basin, based on surveys conducted during the July 2007 Arctic GAkkel Vents Expedition (AGAVE). A previous expedition (2001 AMORE) and seismic stations in the area found evidence for active hydrothermal circulation and seismicity suggesting that volcanic activity may be ongoing at 85°E. We examine multi-beam bathymetric data, digital imagery, and rock and sediment samples in order to determine the nature of volcanic accretion occurring in this environment, including the distribution of flow types and their relationship to features of the axial valley. Raw multi-beam bathymetric data were logged by the Kongsberg EM 120 1°x1° multi-beam echo sounder aboard the icebreaker IB Oden. Digital imagery was recorded on five video and still cameras mounted on the CAMPER fiber-optic wireline vehicle, which was towed 1-3 m above the seafloor, during thirteen drift-dives over features of interest including a volcanic ridge in the axial valley named Duque's Hill, and the Oden and Loke volcanoes that are part of the newly discovered Asgard volcanic chain. Talus, lava flows, and volcaniclastics were sampled with the clamshell grabber and slurp suction sampler on CAMPER. A variety of lava morphologies are identified in the imagery, including large basalt pillows with buds and other surface ornamentation, lava tubes, lobates, sheet flows, and a thick cover of volcaniclastic sediment over extensive areas suggestive of explosive volcanic activity.

  7. Atmospheric Dust in the Upper Colorado River Basin: Integrated Analysis of Digital Imagery, Total Suspended Particulate, and Meteorological Data

    NASA Astrophysics Data System (ADS)

    Urban, F. E.; Reynolds, R. L.; Neff, J. C.; Fernandez, D. P.; Reheis, M. C.; Goldstein, H.; Grote, E.; Landry, C.

    2012-12-01

    Improved measurement and observation of dust emission and deposition in the American west would advance understanding of (1) landscape conditions that promote or suppress dust emission, (2) dynamics of dryland and montane ecosystems, (3) premature melting of snow cover that provides critical water supplies, and (4) possible effects of dust on human health. Such understanding can be applied to issues of land management, water-resource management, as well as the safety and well-being of urban and rural inhabitants. We have recently expanded the scope of particulate measurement in the Upper Colorado River basin through the establishment of total-suspended-particulate (TSP) measurement stations located in Utah and Colorado with bi-weekly data (filter) collection, along with protocols for characterizing dust-on-snow (DOS) layers in Colorado mountains. A sub-network of high-resolution digital cameras has been co-located with several of the TSP stations, as well as at other strategic locations. These real-time regional dust-event detection cameras are internet-based and collect digital imagery every 6-15 minutes. Measurements of meteorological conditions to support these collections and observations are provided partly by CLIM-MET stations, four of which were deployed in 1998 in the Canyonlands (Utah) region. These stations provide continuous, near real-time records of the complex interaction of wind, precipitation, vegetation, as well as dust emission and deposition, in different land-use settings. The complementary datasets of dust measurement and observation enable tracking of individual regional dust events. As an example, the first DOS event of water year 2012 (Nov 5, 2011), as documented at Senator Beck Basin, near Silverton, Colorado, was also recorded by the camera at Island-in-the-Sky (200 km to the northwest), as well as in aeolian activity and wind data from the Dugout Ranch CLIM-MET station (170 km to the west-northwest). 
At these sites, strong winds and the presence of dense dust preceded precipitation. Similar conditions and results were recorded in many subsequent water year 2012 DOS events, with complementary quantification in TSP dust-flux records. Spring 2012 included several intense dry (no associated precipitation) regional dust events that occurred after snowmelt. These events during May 25-26, especially, are clearly evident in the imagery, TSP, and local meteorological data.

  8. Time-lapse imagery of the breaching of Marmot Dam, Oregon, and subsequent erosion of sediment by the Sandy River, October 2007 to May 2008

    USGS Publications Warehouse

    Major, Jon J.; Spicer, Kurt R.; Collins, Rebecca A.

    2010-01-01

    In 2007, Marmot Dam on the Sandy River, Oregon, was removed and a temporary cofferdam standing in its place was breached, allowing the river to flow freely along its entire length. Time-lapse imagery obtained from a network of digital single-lens reflex cameras placed around the lower reach of the sediment-filled reservoir behind the dam details rapid erosion of sediment by the Sandy River after breaching of the cofferdam. Within hours of the breaching, the Sandy River eroded much of the nearly 15-m-thick frontal part of the sediment wedge impounded behind the former concrete dam; within 24-60 hours it eroded approximately 125,000 m3 of sediment impounded in the lower 300-meter reach of the reservoir. The imagery shows that the sediment eroded initially through vertical incision, but that lateral erosion rapidly became an important process.

  9. Seeing Earth Through the Eyes of an Astronaut

    NASA Technical Reports Server (NTRS)

    Dawson, Melissa

    2014-01-01

    The Human Exploration Science Office within the ARES Directorate has undertaken a new class of handheld camera photographic observations of the Earth as seen from the International Space Station (ISS). For years, astronauts have attempted to describe their experience in space and how they see the Earth roll by below their spacecraft. Thousands of crew photographs have documented natural features as diverse as the dramatic clay colors of the African coastline, the deep blues of the Earth's oceans, and the swirling Aurora Australis over Australia in the upper atmosphere. Dramatic recent improvements in handheld digital single-lens reflex (DSLR) camera capabilities are now enabling a new field of crew photography: night time-lapse imagery.

  10. Sunglint in Florida Bay taken by the Expedition Two crew

    NASA Image and Video Library

    2001-04-13

    ISS002-E-5466 (13 April 2001) --- From the International Space Station (ISS), an Expedition Two crew member photographed southern Florida, including Dade County with Miami and Miami Beach; Everglades National Park; Big Cypress National Reserve; and the Florida Keys and many other recognizable areas. The crew member, using a digital still camera on this same pass, also recorded imagery of the Lake Okeechobee area, just north of the area represented in this frame.

  11. Commercial vs professional UAVs for mapping

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.; Koukouvelas, Ioannis

    2017-09-01

    The continuous advancements in the technology behind Unmanned Aerial Vehicles (UAVs), together with the steady decrease in their cost and the availability of photogrammetric software, make UAVs an excellent tool for large-scale mapping. In addition, the use of UAVs significantly reduces costs, time consumption, and possible terrain-accessibility problems. However, despite the growing number of UAV applications, there has been little quantitative assessment of UAV performance and of the quality of the derived products (orthophotos and Digital Surface Models). Here, we present results from field experiments designed to evaluate the accuracy of photogrammetrically derived digital surface models (DSMs) developed from imagery acquired with onboard digital cameras. We also compare high-resolution and moderate-resolution imagery for large-scale geomorphic mapping. The data analyzed in this study come from a small commercial and a professional UAV. The test area was mapped over the same photogrammetric grid by the two UAVs. 3D models, DSMs, and orthophotos were created using dedicated software. These products were compared to in situ survey measurements, and the results are presented in this paper.

  12. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of a DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, assessing the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing, based on a practical test-field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test-field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs were used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures, such as RMSE, median, normalized median absolute deviation with their confidence intervals, and quantiles, are computed. The completeness of the DTM is a frequently overlooked quality parameter, but when a DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original point density is presented with a density plot or map, and the completeness with a map of point density and a map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, both in the detailed representation of the terrain and in good height accuracy.
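    The accuracy measures named in this record (RMSE, median, and the normalized median absolute deviation) are standard and straightforward to reproduce. A minimal sketch of how they could be computed from DTM-minus-checkpoint height differences, using the conventional 1.4826 scaling for the NMAD, might look like this (the function name and interface are illustrative, not the authors' in-house software):

```python
import numpy as np

def dtm_accuracy_measures(dz):
    """Vertical-accuracy measures for DTM checkpoint residuals dz
    (DTM height minus reference height), including the robust NMAD."""
    dz = np.asarray(dz, dtype=float)
    rmse = np.sqrt(np.mean(dz ** 2))
    med = np.median(dz)
    # Normalized median absolute deviation: a robust spread estimate,
    # scaled so it matches the standard deviation for normally
    # distributed errors (hence the 1.4826 factor).
    nmad = 1.4826 * np.median(np.abs(dz - med))
    return {"rmse": rmse, "median": med, "nmad": nmad}
```

Unlike the RMSE, the median and NMAD are insensitive to a few gross outliers in the residuals, which is why robust measures of this kind are often reported alongside the RMSE for photogrammetric DTMs.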

  13. History and use of remote sensing for conservation and management of federal lands in Alaska, USA

    USGS Publications Warehouse

    Markon, Carl

    1995-01-01

    Remote sensing has been used to aid land-use planning efforts for federal public lands in Alaska since the 1940s. Four federal land management agencies (the U.S. Fish and Wildlife Service, U.S. Bureau of Land Management, U.S. National Park Service, and U.S. Forest Service) have used aerial photography and satellite imagery to document the extent, type, and condition of Alaska's natural resources. Aerial photographs have been used to collect detailed information over small to medium-sized areas; this standard management tool can be obtained with equipment ranging from hand-held 35-mm cameras to precision metric mapping cameras. Satellite data, equally important, provide synoptic views of landscapes, can be manipulated digitally, and are easily merged with other digital databases. To date, over 109.2 million ha (72%) of Alaska's land cover has been mapped via remote sensing. This information has provided a base for conservation, management, and planning on federal public lands in Alaska.

  14. Recent improvements in hydrometeor sampling using airborne holography

    NASA Astrophysics Data System (ADS)

    Stith, J. L.; Bansemer, A.; Glienke, S.; Shaw, R. A.; Aquino, J.; Fugal, J. P.

    2017-12-01

    Airborne digital holography provides a new technique to study the sizes, shapes and locations of hydrometeors. Airborne holographic cameras are able to capture more optical information than traditional airborne hydrometeor instruments, which allows for more detailed information, such as the location and shape of individual hydrometeors over a relatively wide range of sizes. These cameras can be housed in an anti-shattering probe arm configuration, which minimizes the effects of probe tip shattering. Holographic imagery, with its three-dimensional view of hydrometeor spacing, is also well suited to detecting shattering events when present. A major problem with digital holographic techniques has been the amount of machine time and human analysis involved in analyzing holographic data. Here, we present some recent examples showing how holographic analysis can improve our measurements of liquid and ice particles, and we describe a format we have developed for routine archiving of holographic data, so that processed results can be utilized more routinely by a wider group of investigators. We present a side-by-side comparison of the imagery obtained from holographic reconstruction of ice particles from a holographic camera (HOLODEC) with imagery from a 3VCPI instrument, which utilizes a tube-based sampling geometry. Both instruments were carried on the NSF/NCAR GV aircraft. In a second application of holographic imaging, we compare measurements of cloud droplets from a Cloud Droplet Probe (CDP) with simultaneous measurements from HOLODEC. In some cloud regions the CDP data exhibit a bimodal size distribution, while the more local data from HOLODEC suggest that two mono-modal size distributions are present in the cloud and that the bimodality observed in the CDP is due to the averaging length. Thus, the holographic techniques have the potential to improve our understanding of the warm rain process in future airborne field campaigns.
The development of this instrument has been a university and national lab collaboration. Progress in automating the processing techniques has now reached a stage where processed data can be made readily available, so that holographic data from a field campaign can be utilized by a wider group of investigators.

  15. Camera perspective bias in videotaped confessions: experimental evidence of its perceptual basis.

    PubMed

    Ratcliff, Jennifer J; Lassiter, G Daniel; Schmidt, Heather C; Snyder, Celeste J

    2006-12-01

    The camera perspective from which a criminal confession is videotaped influences later assessments of its voluntariness and the suspect's guilt. Previous research has suggested that this camera perspective bias is rooted in perceptual rather than conceptual processes, but these data are strictly correlational. In 3 experiments, the authors directly manipulated perceptual processing to provide stronger evidence of its mediational role. Prior to viewing a videotape of a simulated confession, participants were shown a photograph of the confessor's apparent victim. Participants in a perceptual interference condition were instructed to visualize the image of the victim in their minds while viewing the videotape; participants in a conceptual interference condition were instructed instead to rehearse an 8-digit number. Because mental imagery and actual perception draw on the same available resources, the authors anticipated that the former, but not the latter, interference task would disrupt the camera perspective bias, if indeed it were perceptually mediated. Results supported this conclusion.

  16. Remote sensing for hurricane Andrew impact assessment

    NASA Technical Reports Server (NTRS)

    Davis, Bruce A.; Schmidt, Nicholas

    1994-01-01

    Stennis Space Center personnel flew a Learjet equipped with instrumentation designed to acquire imagery in many spectral bands into areas most damaged by Hurricane Andrew. The calibrated airborne multispectral scanner (CAMS), a NASA-developed sensor, and a Zeiss camera acquired images of these areas. The information derived from the imagery was used to assist Florida officials in assessing the devastation caused by the hurricane. The imagery provided the relief teams with an assessment of the debris covering roads and highways so cleanup plans could be prioritized. The imagery also mapped the level of damage in residential and commercial areas of southern Florida and provided maps of beaches and land cover for determination of beach loss and vegetation damage, particularly the mangrove population. Stennis Space Center personnel demonstrated the ability to respond quickly and the value of such response in an emergency situation. The digital imagery from the CAMS can be processed, analyzed, and developed into products for field crews faster than conventional photography. The resulting information is versatile and allows for rapid updating and editing. Stennis Space Center and state officials worked diligently to compile information to complete analyses of the hurricane's impact.

  17. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  18. High-resolution streaming video integrated with UGS systems

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew

    2010-04-01

    Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems, providing ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made in the imagery portion of such systems, with the result that these systems produce lower-resolution images in small quantities. Currently, a high-resolution wireless imaging system is being developed to bring megapixel streaming video to remote locations, operating in concert with UGS. This paper provides an overview of how Wi-Fi radios, new image-based digital signal processors (DSPs) running advanced target-detection algorithms, and high-resolution cameras give the user an opportunity to take high-powered video imagers into areas where power conservation is a necessity.

  19. Astronaut Photography of the Earth: A Long-Term Dataset for Earth Systems Research, Applications, and Education

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.

    2017-01-01

    The NASA Earth observations dataset obtained by humans in orbit using handheld film and digital cameras is freely accessible to the global community through the online searchable database at https://eol.jsc.nasa.gov, and offers a useful complement to traditional ground-commanded sensor data. The dataset includes imagery from the NASA Mercury program (1961) through the present-day International Space Station (ISS) program, and currently totals over 2.6 million individual frames. Geographic coverage of the dataset includes land and ocean areas between approximately 52 degrees North and South latitude, but is spatially and temporally discontinuous. The photographic dataset includes some significant impediments to immediate research, applied, and educational use: commercial RGB films and camera systems with overlapping bandpasses; use of different focal-length lenses, unconstrained look angles, and variable spacecraft altitudes; and no native geolocation information. Such factors led to this dataset being underutilized by the community, but recent advances in automated and semi-automated image geolocation, image feature classification, and web-based services are adding new value to the astronaut-acquired imagery. A coupled ground-software and on-orbit hardware system for the ISS is in development for planned deployment in mid-2017; this system will capture camera pose information for each astronaut photograph to allow automated, full georegistration of the data. The ground component of the system is currently in use to fully georeference imagery collected in response to International Disaster Charter activations, and the auto-registration procedures are being applied to the extensive historical database of imagery to add value for research and educational purposes.
In parallel, machine learning techniques are being applied to automate feature identification and classification throughout the dataset, in order to build descriptive metadata that will improve search capabilities. It is expected that these value additions will increase interest and use of the dataset by the global community.

  20. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This holds for both of the common surface-recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo-matching algorithms. Novel ground-truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open-access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy.
We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
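    Although not part of this record's own workflow description, the point that camera-to-object and baseline distances control DEM error follows from standard stereo geometry: with depth Z = f·B/d (focal length f in pixels, baseline B, disparity d), a small disparity error dd propagates to first order as dZ ≈ Z²/(f·B)·dd. A small illustrative sketch of this relation (the function name and the default half-pixel disparity error are assumptions, not values from the study):

```python
def expected_depth_error(Z, focal_px, baseline, disparity_error_px=0.5):
    """First-order depth-error estimate for a stereo pair.

    From Z = f*B/d, a small disparity (matching) error dd propagates as
    dZ ~= Z**2 / (f*B) * dd: error grows quadratically with the
    camera-to-object distance and shrinks with a longer baseline.
    Units of Z and baseline must match; focal length is in pixels.
    """
    return (Z ** 2) / (focal_px * baseline) * disparity_error_px
```

This is why a careful choice of camera-to-object and baseline distances, as the abstract recommends, directly trades off depth accuracy against the extent of occluded areas (longer baselines improve accuracy but increase occlusion).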

  1. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28mm focal length lens, resulting in a 10cm GSD and swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown, and thus there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) Lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
    · Higher and uniform spatial resolution: 40 cm GSD
    · 45% wider swath: 435 meters vs. 300 meters at 500-meter flight altitude
    · Visible RGB co-registered overlay at 10 cm GSD
    · Enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through)
    Examples will be presented of the utility of these advantages, and a novel use of a cell phone camera for aerial photogrammetry will also be presented.
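    The per-frame zero-mean-difference adjustment described in this record can be sketched as follows. This is an illustrative reading of the stated methodology, not the actual DMS processing code (the function name and interface are hypothetical): each frame is shifted vertically by the mean DEM-minus-ATM difference, and the residual RMS used for the quality statistics is reported.

```python
import numpy as np

def adjust_frame_to_atm(dem, dem_at_atm, atm_z):
    """Vertically adjust one photogrammetric elevation-model frame to lidar.

    dem        : 2-D array of photogrammetric heights for one frame
    dem_at_atm : DEM heights sampled at the ATM lidar point locations
    atm_z      : ATM lidar heights at those same locations

    The frame is shifted by a constant so the mean DEM-minus-ATM
    difference over the frame becomes zero; the post-adjustment
    residual RMS is returned for per-frame quality statistics.
    """
    dem_at_atm = np.asarray(dem_at_atm, dtype=float)
    atm_z = np.asarray(atm_z, dtype=float)
    bias = np.mean(dem_at_atm - atm_z)
    adjusted = np.asarray(dem, dtype=float) - bias
    residual_rms = np.sqrt(np.mean((dem_at_atm - bias - atm_z) ** 2))
    return adjusted, residual_rms
```

A constant shift of this kind removes the absolute-elevation uncertainty of the photogrammetric solution while preserving the DEM's higher spatial resolution, which is exactly the division of labor between DMS and ATM described above.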

  2. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station, allowing testing of the cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere with dry nitrogen, providing a level of protection for the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data are analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation in imagery quality has been observed. The HDEV payload continues to operate, live-streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  3. Transferring Knowledge from a Bird's-Eye View - Earth Observation and Space Travels in Schools

    NASA Astrophysics Data System (ADS)

    Rienow, Andreas; Hodam, Henryk; Menz, Gunter; Voß, Kerstin

    2014-05-01

    In spring 2014, four commercial cameras will be transported by a Dragon spacecraft to the International Space Station (ISS) and mounted to the ESA Columbus laboratory. The cameras will deliver live earth observation data from different angles. The "Columbus-Eye" project aims at distributing the video and image data produced by those cameras through a web portal. It should primarily serve as a learning portal for pupils, comprising teaching material built around the ISS earth observation imagery. The pupils should be motivated to work with the images in order to learn about curriculum-relevant topics in the natural sciences. The material will be prepared based on the experiences of the FIS (German abbreviation for "Remote Sensing in Schools") project and its learning portal. Recognizing that in-depth use of satellite imagery can only be achieved by means of computer-aided learning methods, a sizeable number of e-learning contents in German and English have been created in the 5 years since the FIS kickoff. The talk presents the educational valorization of remote sensing data as well as their interactive implementation for teachers and pupils in both learning portals. It will be shown what possibilities the topic of remote sensing offers for teaching the regular curricula of Geography, Biology, Physics, Math and Informatics. Besides the sequenced implementation in digital and interactive teaching units, examples of a richly illustrated encyclopedia as well as easy-to-use image processing tools are given. The presentation finally addresses the question of how synergies with space travel can be used to enhance the fascination of earth observation imagery in the light of problem-based learning in everyday school lessons.

  4. In-situ calibration of nonuniformity in infrared staring and modulated systems

    NASA Astrophysics Data System (ADS)

    Black, Wiley T.

    Infrared cameras can directly measure the apparent temperature of objects, providing thermal imaging. However, the raw output from most infrared cameras suffers from a strong, often limiting noise source called nonuniformity. Manufacturing imperfections in infrared focal planes lead to high pixel-to-pixel sensitivity to electronic bias, focal-plane temperature, and other effects. The resulting imagery can only provide useful thermal imaging after a nonuniformity calibration has been performed. Traditionally, these calibrations are performed by momentarily blocking the field of view with a flat-temperature plate or blackbody cavity. However, because the pattern is a coupling of manufactured sensitivities with operational variations, periodic recalibration is required, sometimes on the order of tens of seconds. A class of computational methods called Scene-Based Nonuniformity Correction (SBNUC) has been researched for over 20 years, in which the nonuniformity calibration is estimated in digital processing by analysis of the video stream in the presence of camera motion. The most sophisticated SBNUC methods can completely and robustly eliminate the high-spatial-frequency component of nonuniformity with only an initial reference calibration, or potentially no physical calibration. I will demonstrate a novel algorithm that advances these SBNUC techniques to support nonuniformity correction at all spatial frequencies. Long-wave infrared microgrid polarimeters are a class of camera that incorporate a microscale per-pixel wire-grid polarizer affixed directly to each pixel of the focal plane. These cameras have the capability of simultaneously measuring thermal imagery and polarization in a robust integrated package with no moving parts. I will describe the necessary adaptations of my SBNUC method to operate on this class of sensor, as well as demonstrate SBNUC performance on LWIR polarimetry video collected on the UA mall.
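    For context, the flat-plate reference calibration that SBNUC methods aim to replace is the classic two-point correction: per-pixel gain and offset are fitted from two uniform blackbody views at known temperatures. A minimal sketch of that standard method (this is the textbook baseline, not the author's SBNUC algorithm; names are illustrative):

```python
import numpy as np

def two_point_nuc(raw, cold_frame, hot_frame, T_cold, T_hot):
    """Classic two-point nonuniformity correction.

    cold_frame, hot_frame : per-pixel mean responses to uniform
    blackbody views at temperatures T_cold and T_hot. A per-pixel
    linear map (gain, offset) is fitted so every pixel converts raw
    counts to the same apparent-temperature scale.
    """
    gain = (T_hot - T_cold) / (hot_frame - cold_frame)
    offset = T_cold - gain * cold_frame
    return gain * raw + offset
```

Because the per-pixel gains and offsets drift with electronic bias and focal-plane temperature, this calibration goes stale, which is the motivation for estimating the correction continuously from scene motion instead.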

  5. Integration of orthophotographic and sidescan sonar imagery: an example from Lake Garda, Italy

    USGS Publications Warehouse

    Gentili, Giuseppe; Twichell, David C.; Schwab, Bill

    1996-01-01

    Digital orthophotos of Lake Garda basin area are available at the scale of up to 1:10,000 from a 1994 high altitude (average scale of 1:75,000) air photo coverage of Italy collected with an RC30 camera and Panatomic film. In October 1994 the lake bed was surveyed by USGS and CISIG personnel using a SIS 1000 Sea-Floor Mapping System. Subsystems of the SIS-1000 include high resolution sidescan sonar and sub-bottom profiler. The sidescan imagery was collected in ranges up to 1500m, while preserving a 50cm pixel resolution. The system was navigated using differential GPS. The extended operational range of the sidescan sonar permitted surveying the 370km lake area in 11 days. Data were compiled into a digital image with a pixel resolution of about 2m and stored as 12 gigabytes in exabyte 8mm tape and converted from WGS84 coordinate system to the European Datum (ED50) and integrated with bathymetric data digitized from maps.The digital bathymetric model was generated by interpolation using commercial software and was merged with the land elevation model to obtain a digital elevation model of the Lake Garda basin.The sidescan image data was also projected in the same coordinate system and seamed with the digital orthophoto of the land to produce a continuous image of the basin as if the water were removed. Some perspective scenes were generated by combining elevation and bathymetric data with basin and lake floor images. In deep water the lake's thermal structure created problems with the imagery indicating that winter or spring is best survey period. In shallow waters, ≤ 10 m, where data are missing, the bottom data gap can be filled with available images from the first few channels of the Daedalus built MIVIS, a 102 channel hyperspectral scanner with 20 channel bands of 0.020 μm width, operating in the visible part of the spectrum. 
By integrating orthophotos with sidescan imagery we can see how the basin morphology extends across the lake, the paths taken by the lake inlet along the lake bed, and the areal distribution of sediments. An extensive exposure of debris aprons was noted on the western side of the lake. Various anthropogenic objects were recognized: pipelines, waste disposal sites on the lake bed, and relicts of Venetian and Austrian(?) boats.

  6. Motion Imagery and Robotics Application Project (MIRA)

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney P.

    2010-01-01

    This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications are presented. A description of a candidate camera under development is also shown.

  7. Stream network analysis from orbital and suborbital imagery, Colorado River Basin, Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator)

    1973-01-01

The author has identified the following significant results. Orbital SL-2 imagery (earth terrain camera S-190B), received September 5, 1973, was subjected to quantitative network analysis and compared to 7.5 minute topographic mapping (scale: 1/24,000) and U.S.D.A. conventional black and white aerial photography (scale: 1/22,200). Results can only be considered suggestive because detail on the SL-2 imagery was badly obscured by heavy cloud cover. The upper Bee Creek basin was chosen for analysis because it appeared in a relatively cloud-free portion of the orbital imagery. Drainage maps were drawn from the three sources, digitized into a computer-compatible format, and analyzed by the WATER system computer program. Even at its small scale (1/172,000) and with bad haze the orbital photo showed much drainage detail. The contour-like character of the Glen Rose Formation's resistant limestone units allowed channel definition. The errors in pattern recognition can be attributed to local areas of dense vegetation and to other areas of very high albedo caused by surficial exposure of caliche. The latter effect caused particular difficulty in the determination of drainage divides.

  8. Camera system considerations for geomorphic applications of SfM photogrammetry

    USGS Publications Warehouse

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating accuracies of four SfM datasets acquired over multiple years of a gravel-bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. 
Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
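As a rough illustration of the nominal ground sample distance metric this record identifies as the best accuracy predictor, the ground footprint of one pixel follows from pixel pitch, focal length, and object distance. A minimal sketch; the camera values below are hypothetical, not from the study:

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, distance_m):
    """Nominal GSD: ground footprint of a single pixel, in metres.

    Derived from similar triangles: one pixel of size p behind a lens of
    focal length f, imaging a surface at distance d, covers p * d / f.
    """
    return pixel_pitch_mm * distance_m / focal_length_mm

# Hypothetical consumer-grade camera: 4.9 um pixel pitch (0.0049 mm),
# 24 mm lens, imaging a floodplain surface 100 m away.
gsd = ground_sample_distance(0.0049, 24.0, 100.0)
print(f"{gsd * 100:.1f} cm/pixel")  # → 2.0 cm/pixel
```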

  9. Public engagement in 3D flood modelling through integrating crowd sourced imagery with UAV photogrammetry to create a 3D flood hydrograph.

    NASA Astrophysics Data System (ADS)

    Bond, C. E.; Howell, J.; Butler, R.

    2016-12-01

With an increase in flood and storm events affecting infrastructure, the role of weather systems in a changing climate, and their impact, is of increasing interest. Here we present a new workflow integrating crowd-sourced imagery from the public with UAV photogrammetry to create the first 3D hydrograph of a major flooding event. On December 30th 2015, Storm Frank brought high-magnitude rainfall to the Dee catchment in Aberdeenshire, resulting in the highest river level ever recorded for the Dee, with significant impact on infrastructure and river morphology. The worst of the flooding occurred during daylight hours and was digitally captured by the public on smart phones and cameras. After the flood event a UAV was used to acquire photogrammetric imagery to create a textured elevation model of the area around Aboyne Bridge on the River Dee. A media campaign solicited digital imagery from the public, resulting in over 1,000 submitted images. EXIF time and date data embedded in the imagery were used to sort the images into a time series. Markers such as signs, walls, fences and roads within the images were used to determine river level height through the flood, and matched onto the elevation model to contour the change in river level. The resulting 3D hydrograph shows the build-up of water on the upstream side of the bridge that resulted in significant scouring and undermining during the flood. We have created the first known data-based 3D hydrograph for a river section, from a UAV photogrammetric model and crowd-sourced imagery. For future flood warning and infrastructure management, a solution that allows a real-time hydrograph to be created, using augmented reality to integrate the river level information in crowd-sourced imagery directly onto a 3D model, would significantly improve management planning and infrastructure resilience assessment.
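The time-series sorting step described in this record relies on the EXIF capture timestamp, which cameras record as "YYYY:MM:DD HH:MM:SS" in the DateTimeOriginal tag. A minimal sketch, assuming the tags have already been read from each file (in practice a library such as Pillow would extract them); filenames and times are illustrative:

```python
from datetime import datetime

# EXIF DateTimeOriginal uses colons in the date part.
EXIF_FORMAT = "%Y:%m:%d %H:%M:%S"

def sort_by_capture_time(records):
    """Order image records into a time series by their EXIF timestamp."""
    return sorted(
        records,
        key=lambda r: datetime.strptime(r["DateTimeOriginal"], EXIF_FORMAT),
    )

# Hypothetical crowd-sourced submissions from the flood day.
images = [
    {"file": "IMG_0042.jpg", "DateTimeOriginal": "2015:12:30 14:05:11"},
    {"file": "IMG_0007.jpg", "DateTimeOriginal": "2015:12:30 09:31:02"},
    {"file": "IMG_0105.jpg", "DateTimeOriginal": "2015:12:30 11:48:55"},
]

series = sort_by_capture_time(images)
print([r["file"] for r in series])
# → ['IMG_0007.jpg', 'IMG_0105.jpg', 'IMG_0042.jpg']
```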

  10. Using digital photogrammetry to constrain the segmentation of Paleocene volcanic marker horizons within the Nuussuaq basin

    NASA Astrophysics Data System (ADS)

    Vest Sørensen, Erik; Pedersen, Asger Ken

    2017-04-01

Digital photogrammetry is used to map important volcanic marker horizons within the Nuussuaq Basin, West Greenland. We use a combination of oblique stereo images acquired from helicopter using handheld cameras and traditional aerial photographs. The oblique imagery consists of scanned stereo photographs acquired with analogue cameras in the 1990s and newer digital images acquired with high-resolution digital consumer cameras. The photogrammetric software packages SOCET SET and 3D Stereo Blend are used to control the seamless movement between stereo-models at different scales and viewing angles, and the mapping is done stereoscopically using 3D monitors and human stereopsis. The approach allows us to map in three dimensions three characteristic marker horizons (Tunoqqu, Kûgánguaq and Qordlortorssuaq Members) within the picritic Vaigat Formation. They formed toward the end of the same volcanic episode and are believed to be closely related in time. They formed an approximately coherent sub-horizontal surface, the Tunoqqu Surface, that at the time of formation covered more than 3100 km2 on Disko and Nuussuaq. Our mapping shows that the Tunoqqu Surface is now segmented into areas of different elevation and structural trend as a result of later tectonic deformation. This is most notable on Nuussuaq, where the western part is elevated and in places highly faulted. In western Nuussuaq the surface has been uplifted and faulted so that it now forms an asymmetric anticline. The flanks of the anticline are coincident with two N-S oriented pre-Tunoqqu extensional faults. The deformation of the Tunoqqu Surface could be explained by inversion of older extensional faults due to an overall E-W directed compressive regime in the late Paleocene.

  11. The PRo3D View Planner - interactive simulation of Mars rover camera views to optimise capturing parameters

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Ortner, Thomas; Hesina, Gerd; Barnes, Robert; Gupta, Sanjeev; Paar, Gerhard

    2017-04-01

High resolution Digital Terrain Models (DTM) and Digital Outcrop Models (DOM) are highly useful for geological analysis and mission planning in planetary rover missions. PRo3D, developed as part of the EU-FP7 PRoViDE project, is a 3D viewer in which orbital DTMs and DOMs derived from rover stereo imagery can be rendered in a virtual environment for exploration and analysis. It allows fluent navigation over planetary surface models and provides a variety of measurement and annotation tools to complete an extensive geological interpretation. A key aspect of the image collection during planetary rover missions is determining the optimal viewing positions of rover instruments from different positions ('wide baseline stereo'). For the collection of high quality panoramas and stereo imagery, the visibility of regions of interest from those positions, and the amount of common features shared by each stereo-pair or image bundle, is crucial. The creation of a highly accurate and reliable 3D surface, in the form of an Ordered Point Cloud (OPC), of the planetary surface, with a low rate of error and a minimum of artefacts, is greatly enhanced by using images that share a high amount of features and a sufficient overlap for wide baseline stereo or target selection. To support users in the selection of adequate viewpoints, an interactive View Planner was integrated into PRo3D. Users choose from a set of different rovers and their respective instruments. PRo3D supports, for instance, the PanCam instrument of ESA's ExoMars 2020 rover mission and the Mastcam-Z camera of NASA's Mars 2020 mission. The View Planner uses a DTM obtained from orbiter imagery, which can also be complemented with rover-derived DOMs as the mission progresses. The selected rover is placed onto a position on the terrain - interactively or using the current rover pose as known from the mission. 
The rover's base polygon and its local coordinate axes, and the chosen instrument's up- and forward vectors, are visualised. The parameters of the instrument's pan and tilt unit (PTU) can be altered via the user interface, or alternatively calculated by selecting a target point on the visualised DTM. In the 3D view, the visible region of the planetary surface resulting from these settings and the camera field-of-view is visualised as a highlighted region with a red border, representing the instrument's footprint. The camera view is simulated and rendered in a separate window, and PTU parameters can be interactively adjusted, allowing viewpoints, directions, and the expected image to be visualised in real-time so that users can fine-tune these settings. In this way, ideal viewpoints and PTU settings for various rover models and instruments can be defined efficiently, resulting in optimal imagery of the regions of interest.

  12. Fusion of UAV photogrammetry and digital optical granulometry for detection of structural changes in floodplains

    NASA Astrophysics Data System (ADS)

    Langhammer, Jakub; Lendzioch, Theodora; Mirijovsky, Jakub

    2016-04-01

Granulometric analysis is a traditional and important method for characterizing sedimentary material, with various applications in sedimentology, hydrology and geomorphology. However, conventional granulometric field survey methods are time consuming, laborious, costly and invasive to the surface being sampled, which can be a limiting factor for their applicability in protected areas. Optical granulometry has recently emerged as an image analysis technique enabling non-invasive survey, employing semi-automated identification of clasts from calibrated digital imagery taken on site with a conventional high-resolution digital camera and a calibrated frame. The image processing allows detection and measurement of mixed-size natural grains, their sorting, and quantitative analysis using standard granulometric approaches. Despite known limitations, the technique today presents a reliable tool, significantly easing and speeding the field survey in fluvial geomorphology. However, such a survey still has limitations in the spatial coverage of the sites and in its applicability to research at a multitemporal scale. In our study, we present a novel approach based on the fusion of two image analysis techniques - optical granulometry and UAV-based photogrammetry - bridging the gap between the need for high-resolution structural information for granulometric analysis and the need for spatially accurate, seamless data coverage. We have developed and tested a workflow that uses a UAV imaging platform to deliver seamless, high-resolution and spatially accurate imagery of the study site, from which the granulometric properties of the sedimentary material can be derived. 
We have set up a workflow modeling chain providing (i) the optimum flight parameters for UAV imagery to balance the two key divergent requirements - imagery resolution and seamless spatial coverage, (ii) the workflow for processing UAV-acquired imagery by means of optical granulometry, and (iii) the workflow for analysis of the spatial distribution and temporal changes of granulometric properties across the point bar. The proposed technique was tested in a case study of an active point bar of a mid-latitude mountain stream in the Sumava Mountains, Czech Republic, exposed to repeated flooding. UAV photogrammetry was used to acquire very high resolution imagery to build high-precision digital terrain models and an orthoimage. The orthoimage was then analyzed using the digital optical granulometric tool BaseGrain. This approach allowed us (i) to analyze the spatial distribution of grain size in seamless transects over an active point bar and (ii) to assess the multitemporal changes in granulometric properties of the point bar material resulting from flooding. The tested framework proved the applicability of the proposed method for granulometric analysis, with accuracy comparable to field optical granulometry. The seamless nature of the data enables study of the spatial distribution of granulometric properties across the study sites as well as analysis of multitemporal changes resulting from repeated imaging.

  13. Potential and limitations of using digital repeat photography to track structural and physiological phenology in Mediterranean tree-grass ecosystems

    NASA Astrophysics Data System (ADS)

    Luo, Yunpeng; EI-Madany, Tarek; Filippa, Gianluca; Carrara, Arnaud; Cremonese, Edoardo; Galvagno, Marta; Hammer, Tiana; Pérez-Priego, Oscar; Reichstein, Markus; Martín Isabel, Pilar; González Cascón, Rosario; Migliavacca, Mirco

    2017-04-01

Tree-grass ecosystems are widely distributed globally (16-35% of the land surface). However, their phenology (especially in water-limited areas) has not yet been well characterized and modeled. Commercial digital cameras provide continuous and relatively extensive phenology data, offering a good opportunity to monitor vegetation and to develop robust methods for extracting important phenological events (phenophases). Here we aimed to assess the usability of digital repeat photography for three Mediterranean tree-grass ecosystems over two different growing seasons (Majadas del Tietar, Spain): to extract critical phenophases for grass and evergreen broadleaved trees (autumn regreening of grass - start of growing season; resprouting of tree leaves; senescence of grass - end of growing season), to assess their uncertainty, and to correlate them with physiological phenology (i.e. phenology of ecosystem-scale fluxes such as Gross Primary Productivity, GPP). We extracted green chromatic coordinates (GCC) and a camera-based normalized difference vegetation index (Camera-NDVI) from an infrared-enabled digital camera using the "Phenopix" R package. We then developed a novel method to retrieve important phenophases from GCC and Camera-NDVI for various regions of interest (ROIs) in the imagery (tree areas, grass, and both - ecosystem), as well as from GPP derived from the eddy covariance tower at the same experimental site. The results show that, at the ecosystem level, phenophases derived from GCC and Camera-NDVI are strongly correlated (R2 = 0.979). Remarkably, we observed that at the end of the growing season phenophases derived from GCC were systematically earlier (ca. 8 days) than phenophases from Camera-NDVI. 
Using the radiative transfer model Soil Canopy Observation Photochemistry and Energy (SCOPE), we demonstrated that this delay is related to the different sensitivity of GCC and NDVI to the fraction of green/dry grass in the canopy, resulting in systematically higher NDVI during the dry-down of the canopy. Phenophases derived from GCC and Camera-NDVI are correlated with phenophases extracted from GPP across sites and years (R2 = 0.966 and 0.976, respectively). For the start of the growing season the coefficient of determination was higher (R2 = 0.89 and 0.98 for GCC vs GPP and Camera-NDVI vs GPP, respectively) than for the end of the growing season (R2 = 0.75 and 0.70 for GCC and Camera-NDVI, respectively). The statistics obtained using phenophases derived from the grass or ecosystem ROIs are similar. In contrast, GCC and Camera-NDVI derived from the tree ROI are relatively constant and not related to the seasonality of GPP. However, the GCC of trees shows a characteristic peak that is synchronous with leaf flushing in spring, as assessed using regular chlorophyll content measurements and automatic dendrometers. In conclusion, we first developed a method to derive phenological events of tree-grass ecosystems using digital repeat photography; second, we demonstrated that the phenology of GPP is strongly dominated by the phenology of the grassland layer; third, we discussed the uncertainty related to the use of GCC and Camera-NDVI during senescence; and finally, we demonstrated the capability of GCC to track crucial phenological events in evergreen broadleaved forest. Our findings confirm that digital repeat photography is a vital data source for characterizing phenology in Mediterranean tree-grass ecosystems.
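The green chromatic coordinate used in this record is the standard camera greenness ratio GCC = G / (R + G + B), averaged over a region of interest. A minimal sketch on synthetic pixel patches (this is the standard formula, not the Phenopix implementation):

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean GCC = G / (R + G + B) over a region of interest.

    rgb: array of shape (H, W, 3) holding camera digital numbers.
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = np.nan  # skip empty (all-zero) pixels
    return float(np.nanmean(rgb[..., 1] / total))

# Synthetic 2x2 ROIs: a pure-green patch gives GCC = 1.0,
# a neutral grey patch gives GCC = 1/3.
green = np.zeros((2, 2, 3)); green[..., 1] = 200
grey = np.full((2, 2, 3), 100)
print(round(green_chromatic_coordinate(green), 3))  # → 1.0
print(round(green_chromatic_coordinate(grey), 3))   # → 0.333
```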

  14. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovéenbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

A Digital Elevation Model (DEM) is a key tool for analyzing spatially dependent processes, including snow accumulation on slopes and glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs) or by including the (GPS) positions of the cameras in the processing chain. We selected MicMac, the open-source digital signal processing library provided by the French Geographic Institute (IGN), for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climate of the Arctic region considered (Ny-Ålesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows appear as a limitation because of the lack of color dynamics in the digital cameras we used. 
A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty in the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.

  15. A new towed platform for the unobtrusive surveying of benthic habitats and organisms

    USGS Publications Warehouse

    Zawada, David G.; Thompson, P.R.; Butcher, J.

    2008-01-01

Maps of coral ecosystems are needed to support many conservation and management objectives, as well as research activities. Examples include ground-truthing aerial and satellite imagery, characterizing essential habitat, assessing changes, and monitoring the progress of restoration efforts. To address some of these needs, the U.S. Geological Survey developed the Along-Track Reef-Imaging System (ATRIS), a boat-based sensor package for mapping shallow-water benthic environments. ATRIS consists of a digital still camera, a video camera, and an acoustic depth sounder affixed to a movable pole. This design, however, restricts its deployment to clear waters less than 10 m deep. To overcome this limitation, a towed version has been developed, referred to as Deep ATRIS. The system is based on a lightweight, computer-controlled, towed vehicle that is capable of following a programmed diving profile. The vehicle is 1.3 m long with a 63-cm wing span and can carry a wide variety of research instruments, including CTDs, fluorometers, transmissometers, and cameras. Deep ATRIS is currently equipped with a high-speed (20 frames · s-1) digital camera, custom-built light-emitting-diode lights, a compass, a 3-axis orientation sensor, and a nadir-looking altimeter. The vehicle dynamically adjusts its altitude to maintain a fixed height above the seafloor. The camera has a 29° x 22° field-of-view and captures color images that are 1360 x 1024 pixels in size. GPS coordinates are recorded for each image. A gigabit ethernet connection enables the images to be displayed and archived in real time on the surface computer. Deep ATRIS has a maximum tow speed of 2.6 m · s-1 and a theoretical operating tow-depth limit of 27 m. With an improved tow cable, the operating depth can be extended to 90 m. Here, we present results from the initial sea trials in the Gulf of Mexico and Biscayne National Park, Florida, USA, and discuss the utility of Deep ATRIS for mapping coral reef habitats. 
Several example mosaics illustrate the high-quality imagery that can be obtained with this system. The images also reveal the potential for unobtrusive animal observations; fish and sea turtles are unperturbed by the presence of Deep ATRIS.

  16. International Space Station Data Collection for Disaster Response

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Evans, Cynthia A.

    2015-01-01

    Remotely sensed data acquired by orbital sensor systems has emerged as a vital tool to identify the extent of damage resulting from a natural disaster, as well as providing near-real time mapping support to response efforts on the ground and humanitarian aid efforts. The International Space Station (ISS) is a unique terrestrial remote sensing platform for acquiring disaster response imagery. Unlike automated remote-sensing platforms it has a human crew; is equipped with both internal and externally-mounted remote sensing instruments; and has an inclined, low-Earth orbit that provides variable views and lighting (day and night) over 95 percent of the inhabited surface of the Earth. As such, it provides a useful complement to autonomous sensor systems in higher altitude polar orbits. NASA remote sensing assets on the station began collecting International Disaster Charter (IDC) response data in May 2012. The initial NASA ISS sensor systems responding to IDC activations included the ISS Agricultural Camera (ISSAC), mounted in the Window Observational Research Facility (WORF); the Crew Earth Observations (CEO) Facility, where the crew collects imagery using off-the-shelf handheld digital cameras; and the Hyperspectral Imager for the Coastal Ocean (HICO), a visible to near-infrared system mounted externally on the Japan Experiment Module Exposed Facility. The ISSAC completed its primary mission in January 2013. It was replaced by the very high resolution ISS SERVIR Environmental Research and Visualization System (ISERV) Pathfinder, a visible-wavelength digital camera, telescope, and pointing system. Since the start of IDC response in 2012 there have been 108 IDC activations; NASA sensor systems have collected data for thirty-two of these events. Of the successful data collections, eight involved two or more ISS sensor systems responding to the same event. 
Data have also been collected by International Partners in response to natural disasters, most notably by JAXA and Roscosmos/Energia through the Uragan program.

  17. Reliability and Validity of Digital Imagery Methodology for Measuring Starting Portions and Plate Waste from School Salad Bars.

    PubMed

    Bean, Melanie K; Raynor, Hollie A; Thornton, Laura M; Sova, Alexandra; Dunne Stewart, Mary; Mazzeo, Suzanne E

    2018-04-12

    Scientifically sound methods for investigating dietary consumption patterns from self-serve salad bars are needed to inform school policies and programs. To examine the reliability and validity of digital imagery for determining starting portions and plate waste of self-serve salad bar vegetables (which have variable starting portions) compared with manual weights. In a laboratory setting, 30 mock salads with 73 vegetables were made, and consumption was simulated. Each component (initial and removed portion) was weighed; photographs of weighed reference portions and pre- and post-consumption mock salads were taken. Seven trained independent raters visually assessed images to estimate starting portions to the nearest ¼ cup and percentage consumed in 20% increments. These values were converted to grams for comparison with weighed values. Intraclass correlations between weighed and digital imagery-assessed portions and plate waste were used to assess interrater reliability and validity. Pearson's correlations between weights and digital imagery assessments were also examined. Paired samples t tests were used to evaluate mean differences (in grams) between digital imagery-assessed portions and measured weights. Interrater reliabilities were excellent for starting portions and plate waste with digital imagery. For accuracy, intraclass correlations were moderate, with lower accuracy for determining starting portions of leafy greens compared with other vegetables. However, accuracy of digital imagery-assessed plate waste was excellent. Digital imagery assessments were not significantly different from measured weights for estimating overall vegetable starting portions or waste; however, digital imagery assessments slightly underestimated starting portions (by 3.5 g) and waste (by 2.1 g) of leafy greens. This investigation provides preliminary support for use of digital imagery in estimating starting portions and plate waste from school salad bars. 
Results might inform methods used in empirical investigations of dietary intake in schools with self-serve salad bars. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  18. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    NASA Astrophysics Data System (ADS)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
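The linearity property this record reports (raw signal proportional to exposure) is typically verified by regressing mean dark-subtracted signal against exposure time and checking the coefficient of determination. A minimal sketch with simulated numbers, not the paper's data:

```python
# Simulated mean dark-subtracted signal (DN) versus exposure time (ms)
# for a sensor that responds linearly, as reported for the IMX219 raw data.
exposures = [10, 20, 40, 80, 160]
signal = [102.0, 204.1, 407.8, 816.2, 1631.9]

n = len(exposures)
mx = sum(exposures) / n
my = sum(signal) / n

# Least-squares slope and R^2 of the straight-line fit.
sxy = sum((x - mx) * (y - my) for x, y in zip(exposures, signal))
sxx = sum((x - mx) ** 2 for x in exposures)
syy = sum((y - my) ** 2 for y in signal)
slope = sxy / sxx
r2 = sxy ** 2 / (sxx * syy)

print(round(slope, 2))  # → 10.2 (DN per ms)
print(r2)               # very close to 1.0 for a linear sensor
```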

  19. Digital coding of Shuttle TV

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B.

    1976-01-01

The Space Shuttle will be using a field-sequential color television system for the first few missions, but present plans are to switch to an NTSC color TV system for future missions. The field-sequential color TV system uses a modified black-and-white camera, producing a TV signal with a digital bandwidth of about 60 Mbps. This article discusses the characteristics of the Shuttle TV systems and proposes a bandwidth-compression technique for the field-sequential color TV system that could operate at 13 Mbps to produce a high-fidelity signal. The proposed bandwidth-compression technique is based on a two-dimensional DPCM system that utilizes the temporal, spectral, and spatial correlation inherent in field-sequential color TV imagery. The proposed system requires about 60 watts and fewer than 200 integrated circuits.
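The core DPCM idea is to transmit, for each sample, only the quantized difference from a prediction built from already-decoded neighbours. A one-dimensional previous-sample sketch (the proposed system additionally exploits spectral and temporal correlation across fields, which is not shown here):

```python
def dpcm_encode(samples, quant_step=1):
    """Previous-sample DPCM: emit quantized prediction residuals."""
    codes, prediction = [], 0
    for s in samples:
        residual = s - prediction
        q = round(residual / quant_step)   # quantize the residual
        codes.append(q)
        prediction += q * quant_step       # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, quant_step=1):
    """Rebuild samples by accumulating dequantized residuals."""
    out, prediction = [], 0
    for q in codes:
        prediction += q * quant_step
        out.append(prediction)
    return out

line = [10, 12, 15, 15, 14, 20]            # illustrative scan-line values
codes = dpcm_encode(line)
print(codes)                # → [10, 2, 3, 0, -1, 6]  (small residuals)
print(dpcm_decode(codes))   # → [10, 12, 15, 15, 14, 20]
```

With a unit quantizer the round trip is lossless; bandwidth compression comes from coarser quantization and entropy coding of the small residuals.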

  20. Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images

    NASA Technical Reports Server (NTRS)

    Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.

    2011-01-01

A stereo correlation method on the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain with the predefined surface normal of a post, rather than on the image domain. The squared error between the back-projected images on the local terrain is minimized with respect to the post elevation. This single-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
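The per-post solve described above reduces to minimizing a scalar cost over one variable, the elevation. A sketch using golden-section search over a hypothetical cost function (the actual cost compares back-projected image intensities; the quadratic below is a stand-in):

```python
import math

def golden_section_minimize(cost, lo, hi, tol=1e-6):
    """Minimize a unimodal single-variable cost on [lo, hi]."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if cost(c) < cost(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Hypothetical stand-in: squared back-projection error as a function of
# the post's elevation h, with a minimum at h = 1523.4 m.
cost = lambda h: (h - 1523.4) ** 2 + 0.7
h_best = golden_section_minimize(cost, 1000.0, 2000.0)
print(round(h_best, 1))  # → 1523.4
```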

  1. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    NASA Technical Reports Server (NTRS)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  2. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

Virtual reality systems can construct virtual environments that provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, stepping left/right, and 360-degree panoramic viewing. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.

  3. ProShare teleconferencing with KIDSAT participants

    NASA Image and Video Library

    1997-02-27

STS081-378-012 (12-22 January 1997) --- Astronaut Marsha S. Ivins, mission specialist, looks at digital still photo imagery on a laptop computer on the Space Shuttle Atlantis' aft flight deck while communicating with students on Earth. Her activity is all part of the once-a-year shuttle participation in an educational endeavor called KidSat. The KidSat project allows students the opportunity to interact with the astronauts' real-time observations and photography of geographic points of interest. The Electronic Still Camera (ESC), which was handled largely by Ivins, can be seen near the computer.

  4. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides its market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images in order to deliver DSM and ortho imagery.

  5. Aerial image databases for pipeline rights-of-way management

    NASA Astrophysics Data System (ADS)

    Jadkowski, Mark A.

    1996-03-01

Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with fewer people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership with NASA and James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company, which operates major gas pipelines in New England, New York, and New Jersey.

  6. The Utility of Using a Near-Infrared (NIR) Camera to Measure Beach Surface Moisture

    NASA Astrophysics Data System (ADS)

    Nelson, S.; Schmutz, P. P.

    2017-12-01

Surface moisture content is an important factor that must be considered when studying aeolian sediment transport in a beach environment. A few different instruments and procedures are available for measuring surface moisture content (e.g., moisture probes, LiDAR, and gravimetric moisture data from surface scrapings); however, these methods can be inaccurate, costly, or impractical, particularly in the field. Near-infrared (NIR) spectral band imagery is another technique used to obtain moisture data. NIR imagery has been predominately used through remote sensing and has yet to be used for ground-based measurements. Dry sand reflects infrared radiation given off by the sun, whereas wet sand absorbs it. This study assesses the utility of measuring surface moisture content of beach sand with a modified NIR camera. A traditional point-and-shoot digital camera was internally modified with the placement of a visible light-blocking filter. Images were taken of three different types of beach sand at controlled moisture content values, with sunlight as the source of infrared radiation. A technique was established through trial and error by comparing resultant histogram values using Adobe Photoshop with the various moisture conditions. The resultant IR absorption histogram values were calibrated to actual gravimetric moisture content from surface scrapings of the samples. Overall, the results illustrate that the NIR spectrum modified camera does not provide the ability to adequately measure beach surface moisture content. However, there were noted differences in IR absorption histogram values among the different sediment types. Sediment with darker quartz mineralogy provided larger variations in histogram values, but the technique is not sensitive enough to accurately represent low moisture percentages, which are of most importance when studying aeolian sediment transport.
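The calibration step described here, relating a histogram statistic from the NIR imagery to gravimetric moisture, can be sketched as a simple least-squares fit. The pairs below are hypothetical round numbers, not the study's data, and the linear form is an assumption for illustration.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure stdlib)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration pairs: mean IR-absorption histogram value
# versus gravimetric moisture content (%) from surface scrapings.
hist_means = [40.0, 80.0, 120.0, 160.0]
moisture = [2.0, 6.0, 10.0, 14.0]
slope, intercept = linear_fit(hist_means, moisture)
predict = lambda h: slope * h + intercept   # moisture estimate for a new image
```

In practice the study found this relationship too insensitive at low moisture percentages, which is where the fit would matter most for aeolian transport work.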

  7. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric

    2018-05-01

Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and then uses the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.

  8. Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques

    USGS Publications Warehouse

    Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.

    2013-01-01

    Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.

  9. The Ship Tethered Aerostat Remote Sensing System (STARRS): Observations of Small-Scale Surface Lateral Transport During the LAgrangian Submesoscale ExpeRiment (LASER)

    NASA Astrophysics Data System (ADS)

    Carlson, D. F.; Novelli, G.; Guigand, C.; Özgökmen, T.; Fox-Kemper, B.; Molemaker, M. J.

    2016-02-01

The Consortium for Advanced Research on the Transport of Hydrocarbon in the Environment (CARTHE) will carry out the LAgrangian Submesoscale ExpeRiment (LASER) to study the role of small-scale processes in the transport and dispersion of oil and passive tracers. The Ship-Tethered Aerostat Remote Sensing System (STARRS) was developed to produce observational estimates of small-scale surface dispersion in the open ocean. STARRS is built around a high-lift-capacity (30 kg) helium-filled aerostat and is equipped with a high resolution digital camera. An integrated GNSS receiver and inertial navigation system permit direct geo-rectification of the imagery. Thousands of drift cards deployed in the field of view of STARRS and tracked over time provide the first observational estimates of small-scale (1-500 m) surface dispersion in the open ocean. The STARRS imagery will be combined with GPS-tracked surface drifter trajectories, shipboard observations, and aerial surveys of sea surface temperature in the DeSoto Canyon. In addition to obvious applications to oil spill modelling, the STARRS observations will provide essential benchmarks for high resolution numerical models.

  10. High Resolution Photogrammetric Digital Elevation Models Across Calving Fronts and Meltwater Channels in Greenland

    NASA Astrophysics Data System (ADS)

    Le Bel, D. A.; Brown, S.; Zappa, C. J.; Bell, R. E.; Frearson, N.; Tinto, K. J.

    2014-12-01

Photogrammetric digital elevation models (DEMs) are a powerful approach for understanding elevation change and dynamics along the margins of the large ice sheets. The IcePod system, mounted on a New York Air National Guard LC-130, can measure high-resolution surface elevations with a Riegl VQ580 scanning laser altimeter and Imperx Bobcat IGV-B6620 color visible-wavelength camera (6600x4400 resolution); the surface temperature with a Sofradir IRE-640L infrared camera (spectral response 7.7-9.5 μm, 640x512 resolution); and the structure of snow and ice with two radar systems. We show the use of IcePod imagery to develop DEMs across calving fronts and meltwater channels in Greenland. Multiple over-flights of the Kangerlussuaq Airport ramp have provided a test of the technique at a location with accurate, independently determined elevation. Here the photogrammetric DEM of the airport, constrained by ground control measurements, is compared with the Lidar results. In July 2014 the IcePod ice-ocean imaging system surveyed the calving fronts of five outlet glaciers north of Jakobshavn Isbrae. We used Agisoft PhotoScan to develop a DEM of each calving front using imagery captured by the IcePod systems. Adjacent to the ice sheet, meltwater plumes foster mixing in the fjord, moving warm ocean water into contact with the front of the ice sheet where it can undercut the ice front and trigger calving. The five glaciers provide an opportunity to examine the calving front structure in relation to ocean temperature, fjord circulation, and spatial scale of the meltwater plumes. The combination of the accurate DEM of the calving front and the thermal imagery used to constrain the temperature and dynamics of the adjacent plume provides new insights into the ice-ocean interactions. Ice sheet margins provide insights into the connections between the surface meltwater and the fate of the water at the ice sheet base.
Surface meltwater channels are visualized here for the first time using the combination of Lidar, photogrammetry DEMs and infrared imagery. These techniques leverage electromagnetic surface properties that allow us to identify the presence of water, measure the slope and elevation of the channel, as well as the two-dimensional temperature variability of the water/ice/snow in multiple melt channels within a drainage system.

  11. Digital Imagery Compression Best Practices Guide - A Motion Imagery Standards Profile (MISP) Compliant Architecture

    DTIC Science & Technology

    2012-06-01

White Sands Missile Range, Reagan Test Site, Yuma Proving Ground, Dugway Proving Ground, Aberdeen Test Center... Digital Motion Imagery Compression Best Practices Guide - A Motion Imagery Standards Profile (MISP) Compliant Architecture... delivery, and archival purposes. These practices are based on a Motion Imagery Standards Profile (MISP) compliant architecture, which has been defined...

  12. High spatio-temporal resolution observations of crater-lake temperatures at Kawah Ijen volcano, East Java, Indonesia

    USGS Publications Warehouse

    Lewicki, Jennifer L.; Corentin Caudron,; Vincent van Hinsberg,; George Hilley,

    2016-01-01

The crater lake of Kawah Ijen volcano, East Java, Indonesia, has displayed large and rapid changes in temperature at point locations during periods of unrest, but measurement techniques employed to date have not resolved how the lake’s thermal regime has evolved over both space and time. We applied a novel approach for mapping and monitoring variations in crater-lake apparent surface (“skin”) temperatures at high spatial (~32 cm) and temporal (every two minutes) resolution at Kawah Ijen on 18 September 2014. We used a ground-based FLIR T650sc camera with digital and thermal infrared (TIR) sensors from the crater rim to collect (1) a set of visible imagery around the crater during the daytime and (2) a time series of co-located visible and TIR imagery at one location from pre-dawn to daytime. We processed daytime visible imagery with the Structure-from-Motion photogrammetric method to create a digital elevation model onto which the time series of TIR imagery was orthorectified and georeferenced. Lake apparent skin temperatures typically ranged from ~21 to 33 °C. At two locations, apparent skin temperatures were ~4 and 7 °C less than in-situ lake temperature measurements at 1.5 and 5 m depth, respectively. These differences, as well as the large spatio-temporal variations observed in skin temperatures, were likely largely associated with atmospheric effects such as evaporative cooling of the lake surface and infrared absorption by water vapor and SO2. Calculations based on orthorectified TIR imagery thus yielded underestimates of volcanic heat fluxes into the lake, whereas volcanic heat fluxes estimated based on in-situ temperature measurements (68 to 111 MW) were likely more representative of Kawah Ijen in a quiescent state. The ground-based imaging technique should provide a valuable tool to continuously monitor crater-lake temperatures and contribute insight into the spatio-temporal evolution of these temperatures associated with volcanic activity.

  13. Imaging_Earth_With_MUSES

    NASA Image and Video Library

    2017-07-11

    Commercial businesses and scientific researchers have a new capability to capture digital imagery of Earth, thanks to MUSES: the Multiple User System for Earth Sensing facility. This platform on the outside of the International Space Station is capable of holding four different payloads, ranging from high-resolution digital cameras to hyperspectral imagers, which will support Earth science observations in agricultural awareness, air quality, disaster response, fire detection, and many other research topics. MUSES program manager Mike Soutullo explains the system and its unique features including the ability to change and upgrade payloads using the space station’s Canadarm2 and Special Purpose Dexterous Manipulator. For more information about MUSES, please visit: https://www.nasa.gov/mission_pages/station/research/news/MUSES For more on ISS science, https://www.nasa.gov/mission_pages/station/research/index.html or follow us on Twitter @ISS_research

  14. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.
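The idea of downweighting poorly fitting observations when mosaicking overlapping DEMs can be sketched with an iteratively reweighted mean. This is illustrative only: the paper estimates per-observation weights via stepwise regression, not the Cauchy-style reweighting used here, and the elevations below are made-up numbers with one shadow-induced blunder.

```python
def robust_mean(observations, iters=10, scale=1.0):
    """Iteratively reweighted mean: observations far from the current
    estimate receive smaller weights, so blunders are suppressed."""
    est = sum(observations) / len(observations)   # start from the plain mean
    for _ in range(iters):
        weights = [1.0 / (1.0 + ((z - est) / scale) ** 2) for z in observations]
        est = sum(w * z for w, z in zip(weights, observations)) / sum(weights)
    return est

# Hypothetical elevations (m) for one DEM post from overlapping stereo
# pairs; the last value is an outlier caused by a shadowed correlation.
posts = [1502.1, 1501.8, 1502.4, 1489.0]
elev = robust_mean(posts)   # pulled toward the consistent cluster, not the blunder
```

A plain mean of these values would be biased low by more than 3 m; the reweighted estimate settles near the consistent cluster around 1502 m.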

  15. A continuous hyperspatial monitoring system of evapotranspiration and gross primary productivity from Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Bandini, Filippo; Jakobsen, Jakob; Zarco-Tejada, Pablo J.; Köppl, Christian Josef; Haugård Olesen, Daniel; Ibrom, Andreas; Bauer-Gottwein, Peter; Garcia, Monica

    2017-04-01

Unmanned Aerial Systems (UAS) can collect optical and thermal hyperspatial (<1m) imagery with low cost and flexible revisit times regardless of cloudy conditions. The reflectance and radiometric temperature signatures of the land surface, closely linked with the vegetation structure and functioning, are already part of models to predict Evapotranspiration (ET) and Gross Primary Productivity (GPP) from satellites. However, there remain challenges for operational monitoring using UAS compared to satellites: the payload capacity of most commercial UAS is less than 2 kg; miniaturized sensors have low signal-to-noise ratios; and their small fields of view require mosaicking hundreds of images with accurate orthorectification. In addition, wind gusts and lower platform stability require appropriate geometric and radiometric corrections. Finally, modeling fluxes on days without images is still an issue for both satellite and UAS applications. This study focuses on designing an operational UAS-based monitoring system, including payload design and sensor calibration, based on routine collection of optical and thermal images in a Danish willow field to jointly monitor ET and GPP dynamics continuously at daily time steps. The payload (<2 kg) consists of a multispectral camera (Tetra Mini-MCA6), a thermal infrared camera (FLIR Tau 2), a digital camera (Sony RX-100) used to retrieve accurate digital elevation models (DEMs) for multispectral and thermal image orthorectification, and a standard GNSS single frequency receiver (UBlox) or a real time kinematic double frequency system (Novatel Inc. flexpack6+OEM628). Geometric calibration of the digital and multispectral cameras was conducted to recover intrinsic camera parameters. After geometric calibration, accurate DEMs with vertical errors of about 10 cm could be retrieved.
Radiometric calibration for the multispectral camera was conducted with an integrating sphere (Labsphere CSTM-USS-2000C) and the laboratory calibration showed that the camera measured radiance had a bias within ±4.8%. The thermal camera was calibrated using a black body at varying target and ambient temperatures and resulted in laboratory accuracy with RMSE of 0.95 K. A joint model of ET and GPP was applied using two parsimonious, physiologically based models, a modified version of the Priestley-Taylor Jet Propulsion Laboratory model (Fisher et al., 2008; Garcia et al., 2013) and a Light Use Efficiency approach (Potter et al., 1993). Both models estimate ET and GPP under optimum potential conditions down-regulated by the same biophysical constraints dependent on remote sensing and atmospheric data to reflect multiple stresses. Vegetation indices were calculated from the multispectral data to assess vegetation conditions, while thermal infrared imagery was used to compute a thermal inertia index to infer soil moisture constraints. To interpolate radiometric temperature between flights, a prognostic Surface Energy Balance model (Margulis et al., 2001) based on the force-restore method was applied in a data assimilation scheme to obtain continuous ET and GPP fluxes. With this operational system, regular flight campaigns with a hexacopter (DJI S900) have been conducted in a Danish willow flux site (Risø) over the 2016 growing season. The observed energy, water and carbon fluxes from the Risø eddy covariance flux tower were used to validate the model simulation. This UAS monitoring system is suitable for agricultural management and land-atmosphere interaction studies.
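The calibration statistics reported here (a radiance bias within ±4.8% and a temperature RMSE of 0.95 K) are standard error measures; a minimal sketch of computing both from paired reference and measured values follows. The black-body temperatures below are toy numbers, not the study's calibration data.

```python
import math

def bias_percent(measured, reference):
    """Mean relative bias of measured vs. reference values, in percent."""
    rel = [(m - r) / r for m, r in zip(measured, reference)]
    return 100.0 * sum(rel) / len(rel)

def rmse(measured, reference):
    """Root-mean-square error of measured vs. reference values."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference))
                     / len(measured))

# Hypothetical black-body calibration points (K): reference vs. camera reading.
ref_T = [283.15, 293.15, 303.15, 313.15]
cam_T = [284.0, 294.2, 303.9, 314.3]
temperature_rmse = rmse(cam_T, ref_T)
radiance_bias = bias_percent(cam_T, ref_T)
```

The same two statistics apply to the multispectral radiance calibration against the integrating sphere; only the paired quantities change.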

  16. High speed spectral measurements of IED detonation fireballs

    NASA Astrophysics Data System (ADS)

    Gordon, J. Motos; Spidell, Matthew T.; Pitz, Jeremey; Gross, Kevin C.; Perram, Glen P.

    2010-04-01

Several homemade explosives (HMEs) were manufactured and detonated at a desert test facility. Visible and infrared signatures were collected using two Fourier transform spectrometers, two thermal imaging cameras, a radiometer, and a commercial digital video camera. Spectral emissions from the post-detonation combustion fireball were dominated by continuum radiation. The events were short-lived, decaying in total intensity by an order of magnitude within approximately 300 ms after detonation. The HME detonation produced a dust cloud in the immediate area that surrounded and attenuated the emitted radiation from the fireball. Visible imagery revealed a dark particulate (soot) cloud within the larger surrounding dust cloud. The ejected dust clouds attenuated much of the radiation from the post-detonation combustion fireballs, thereby reducing the signal-to-noise ratio. The poor SNR at later times made it difficult to detect selective radiation from by-product gases on the time scale (~500 ms) in which they have been observed in other HME detonations.

  17. SPRUCE Vegetation Phenology in Experimental Plots from Phenocam Imagery, 2015-2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, Andrew D.; Hufkens, Koen; Milliman, Thomas

This data set consists of PhenoCam data from the SPRUCE experiment from the beginning of whole ecosystem warming in August 2015 through the end of 2017. Digital cameras, or phenocams, installed in each SPRUCE enclosure track seasonal variation in vegetation “greenness”, a proxy for vegetation phenology and associated physiological activity. Regions of interest (ROIs) were defined for vegetation types (1) Picea trees (EN, evergreen needleleaf); (2) Larix trees (DN, deciduous needleleaf); and (3) the mixed shrub layer (SH, shrubs). This data set consists of two sets of data files: (1) standard “3-day summary product files” for each camera and each ROI (i.e., vegetation type), characterizing vegetation color at a 3-day time step and (2) a “transition date file” containing the estimated “greenness rising” (spring) and “greenness falling” (autumn) transition dates.
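Phenocam "greenness" is commonly summarized as the green chromatic coordinate (GCC) averaged over an ROI's pixels; a minimal sketch with made-up RGB values follows (the exact index used in these data files is not restated in the record, so treat this as an illustrative convention rather than the dataset's definition).

```python
def green_chromatic_coordinate(pixels):
    """Mean GCC = G / (R + G + B) over an ROI's RGB pixels."""
    vals = [g / (r + g + b) for r, g, b in pixels if r + g + b > 0]
    return sum(vals) / len(vals)

# Hypothetical ROI sample: three RGB pixels from a vegetated region.
roi = [(60, 120, 40), (55, 110, 45), (70, 130, 50)]
gcc = green_chromatic_coordinate(roi)   # higher values indicate greener canopy
```

Tracking this single number per ROI through time yields the seasonal greenness curves from which "greenness rising" and "greenness falling" transition dates are estimated.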

  18. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. To process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: the OSM Bundler, VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  19. Using a thermistor flowmeter with attached video camera for monitoring sponge excurrent speed and oscular behaviour

    PubMed Central

    Jorgensen, Damien; Webster, Nicole S.; Pineda, Mari-Carmen; Duckworth, Alan

    2016-01-01

    A digital, four-channel thermistor flowmeter integrated with time-lapse cameras was developed as an experimental tool for measuring pumping rates in marine sponges, particularly those with small excurrent openings (oscula). Combining flowmeters with time-lapse imagery yielded valuable insights into the contractile behaviour of oscula in Cliona orientalis. Osculum cross-sectional area (OSA) was positively correlated to measured excurrent speeds (ES), indicating that sponge pumping and osculum contraction are coordinated behaviours. Both OSA and ES were positively correlated to pumping rate (Q). Diel trends in pumping activity and osculum contraction were also observed, with sponges increasing their pumping activity to peak at midday and decreasing pumping and contracting oscula at night. Short-term elevation of the suspended sediment concentration (SSC) within the seawater initially decreased pumping rates by up to 90%, ultimately resulting in closure of the oscula and cessation of pumping. PMID:27994973
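The reported relationships (OSA positively correlated to ES, and both to pumping rate Q) are plain Pearson correlations; a minimal sketch follows. The paired readings below are hypothetical illustrative values, not the study's measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired readings: osculum cross-sectional area (mm^2)
# and excurrent speed (cm/s) from time-lapse frames and the flowmeter.
osa = [1.0, 2.0, 3.0, 4.0, 5.0]
es = [0.5, 1.1, 1.4, 2.1, 2.4]
r = pearson_r(osa, es)   # strongly positive, as in the reported behaviour
```

A correlation near +1 here is what supports the inference that pumping and osculum contraction are coordinated behaviours rather than independent responses.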

  20. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

The projection of controlled moving targets is key to the quantitative testing of video capture and post processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons in fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal mounting the camera in combination with the moving target projector is discussed as an alternative to high priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide corresponding tests of MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting the exact scenes to the cameras in a repeatable way.

  1. Low-altitude aerial color digital photographic survey of the San Andreas Fault

    USGS Publications Warehouse

    Lynch, David K.; Hudnut, Kenneth W.; Dearborn, David S.P.

    2010-01-01

    Ever since 1858, when Gaspard-Félix Tournachon (pen name Félix Nadar) took the first aerial photograph (Professional Aerial Photographers Association 2009), the scientific value and popular appeal of such pictures have been widely recognized. Indeed, Nadar patented the idea of using aerial photographs in mapmaking and surveying. Since then, aerial imagery has flourished, eventually making the leap to space and to wavelengths outside the visible range. Yet until recently, the availability of such surveys has been limited to technical organizations with significant resources. Geolocation required extensive time and equipment, and distribution was costly and slow. While these situations still plague older surveys, modern digital photography and lidar systems acquire well-calibrated and easily shared imagery, although expensive, platform-specific software is sometimes still needed to manage and analyze the data. With current consumer-level electronics (cameras and computers) and broadband internet access, acquisition and distribution of large imaging data sets are now possible for virtually anyone. In this paper we demonstrate a simple, low-cost means of obtaining useful aerial imagery by reporting two new, high-resolution, low-cost, color digital photographic surveys of selected portions of the San Andreas fault in California. All pictures are in standard jpeg format. The first set of imagery covers a 92-km-long section of the fault in Kern and San Luis Obispo counties and includes the entire Carrizo Plain. The second covers the region from Lake of the Woods to Cajon Pass in Kern, Los Angeles, and San Bernardino counties (151 km) and includes Lone Pine Canyon soon after the ground was largely denuded by the Sheep Fire of October 2009. 
The first survey produced a total of 1,454 oblique digital photographs (4,288 x 2,848 pixels, average 6 MB each) and the second produced 3,762 nadir images from an elevation of approximately 150 m above ground level (AGL) on the southeast leg and 300 m AGL on the northwest leg. Spatial resolution (pixel size or ground sample distance) is a few centimeters. Time and geographic coordinates of the aircraft were automatically written into the exchangeable image file format (EXIF) data within each jpeg photograph. A few hours after acquisition and validation, the photographs were uploaded to a publicly accessible Web page. The goal was to obtain quick-turnaround, low-cost, high-resolution, overlapping, and contiguous imagery for use in planning field operations, and to provide imagery for a wide variety of land use and educational studies. This work was carried out in support of ongoing geological research on the San Andreas fault, but the technique is widely applicable beyond geology.

  2. Data management and digital delivery of analog data

    USGS Publications Warehouse

    Miller, W.A.; Longhenry, Ryan; Smith, T.

    2008-01-01

    The U.S. Geological Survey's (USGS) data archive at the Earth Resources Observation and Science (EROS) Center is a comprehensive and impartial record of the Earth's changing land surface. USGS/EROS has been archiving and preserving land remote sensing data for over 35 years. This remote sensing archive continues to grow as aircraft and satellites acquire more imagery. As a world leader in preserving data, USGS/EROS has a reputation as a technological innovator in solving challenges and ensuring that access to these collections is available. Other agencies also call on the USGS to consider their collections for long-term archive support. To improve access to the USGS film archive, each frame on every roll of film is being digitized by automated high-performance digital camera systems. The system robotically captures a digital image from each film frame for the creation of browse and medium resolution image files. Single-frame metadata records are also created to improve access, which otherwise requires interpreting flight indexes. USGS/EROS is responsible for over 8.6 million frames of aerial photographs and 27.7 million satellite images.

  3. Geology

    NASA Technical Reports Server (NTRS)

    Stewart, R. K.; Sabins, F. F., Jr.; Rowan, L. C.; Short, N. M.

    1975-01-01

    Papers from private industry reporting applications of remote sensing to oil and gas exploration were presented. Digitally processed LANDSAT images were successfully employed in several geologic interpretations. A growing interest in digital image processing among the geologic user community was shown. The papers covered a wide geographic range and a wide technical and application range. Topics included: (1) oil and gas exploration, by use of radar and multisensor studies as well as by use of LANDSAT imagery or LANDSAT digital data, (2) mineral exploration, by mapping from LANDSAT and Skylab imagery and by LANDSAT digital processing, (3) geothermal energy studies with Skylab imagery, (4) environmental and engineering geology, by use of radar or LANDSAT and Skylab imagery, and (5) regional mapping and interpretation by digital and spectral methods.

  4. Using Calibrated RGB Imagery from Low-Cost Uavs for Grassland Monitoring: Case Study at the Rengen Grassland Experiment (rge), Germany

    NASA Astrophysics Data System (ADS)

    Lussem, U.; Hollberg, J.; Menne, J.; Schellberg, J.; Bareth, G.

    2017-08-01

    Monitoring the spectral response of intensively managed grassland throughout the growing season allows optimizing fertilizer inputs by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information within a short time. However, this requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information of RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAV) can offer cost-efficient, near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VI) from consumer-grade cameras mounted on UAVs has been explored in several recent studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance by using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI resulted in R2 values varying from no correlation to up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.
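The Empirical Line Method cited above fits a linear relationship between image digital numbers and known ground reflectances, typically using dark and bright reference panels. A minimal per-band sketch, with illustrative panel values rather than the study's actual calibration data:

```python
import numpy as np

# Known panel reflectances (dark and bright targets) and the mean
# digital numbers observed for them in one band -- illustrative values.
panel_reflectance = np.array([0.05, 0.60])
panel_dn          = np.array([30.0, 210.0])

# Fit DN -> reflectance as a straight line (gain, offset) for this band.
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

def dn_to_reflectance(dn):
    """Convert a digital number to surface reflectance for this band."""
    return gain * dn + offset
```

In practice each band is calibrated separately, and more than two reference targets improve the robustness of the fit.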

  5. Computer output microfilm (FR80) systems software documentation, volume 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The system consists of a series of programs which convert digital data from magnetic tapes into alpha-numeric characters, graphic plots, and imagery that is recorded on the face of a cathode ray tube. A special camera photographs the face of the tube on microfilm for subsequent display on a film reader. The applicable documents which apply to this system are delineated. The functional relationship between the system software, the standard insert routines, and the applications programs is described; all the applications programs are described in detail. Instructions for locating those documents are presented along with test preparations sheets for all baseline and/or program modification acceptance tests.

  6. Increasing use of high-speed digital imagery as a measurement tool on test and evaluation ranges

    NASA Astrophysics Data System (ADS)

    Haddleton, Graham P.

    2001-04-01

    In military research and development or testing there are various fast and dangerous events that need to be recorded and analysed. High-speed cameras allow the capture of movement too fast to be recognised by the human eye, and provide data that is essential for the analysis and evaluation of such events. High-speed photography is often the only type of instrumentation that can be used to record the parameters demanded by our customers. I will show examples where this applied cinematography is used not only to provide a visual record of events, but also as an essential measurement tool.

  7. An introduction to the interim digital SAR processor and the characteristics of the associated Seasat SAR imagery

    NASA Technical Reports Server (NTRS)

    Wu, C.; Barkan, B.; Huneycutt, B.; Leang, C.; Pang, S.

    1981-01-01

    Basic engineering data regarding the Interim Digital SAR Processor (IDP) and the digitally correlated Seasat synthetic aperture radar (SAR) imagery are presented. The correlation function and IDP hardware/software configuration are described, and a preliminary performance assessment is presented. The geometric and radiometric characteristics, with special emphasis on those peculiar to the IDP-produced imagery, are described.

  8. Acquisition of airborne imagery in support of Deepwater Horizon oil spill recovery assessments

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Muller-Karger, Frank E.

    2012-09-01

    Remote sensing imagery was collected from a low flying aircraft along the near coastal waters of the Florida Panhandle and northern Gulf of Mexico and into Barataria Bay, Louisiana, USA, during March 2011. Imagery was acquired from an aircraft that simultaneously collected traditional photogrammetric film imagery, digital video, digital still images, and digital hyperspectral imagery. The original purpose of the project was to collect airborne imagery to support assessment of weathered oil in littoral areas influenced by the Deepwater Horizon oil and gas spill that occurred during the spring and summer of 2010. This paper describes the data acquired and presents information that demonstrates the utility of small spatial scale imagery to detect the presence of weathered oil along littoral areas in the northern Gulf of Mexico. Flight tracks and examples of imagery collected are presented and methods used to plan and acquire the imagery are described. Results suggest weathered oil in littoral areas after the spill was contained at the source.

  9. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability of generating 3D information from single camera imagery offers a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
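The quadratic deterioration of depth accuracy with range described above follows from the standard first-order error model for triangulation-based sensors, where a plenoptic camera's effective baseline is limited by its main-lens aperture. A sketch with illustrative parameters (not the paper's camera specification):

```python
def depth_error(z_m, baseline_m, focal_px, disparity_err_px):
    """First-order depth uncertainty for a triangulation-based sensor:
    dZ = Z^2 * d_disparity / (f * B). Error grows with the square of depth."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Illustrative: 2 cm effective baseline, 8000 px focal length,
# 0.1 px disparity uncertainty, target at 50 m
err = depth_error(50.0, 0.02, 8000.0, 0.1)  # -> 1.5625 m, roughly 3% of range
```

The Z-squared dependence explains why errors tolerable at close range become several metres at 30-100 m, and why spatial filtering of point clusters is needed.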

  10. Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery

    DOE PAGES

    Richardson, Andrew D.; Hufkens, Koen; Milliman, Tom; ...

    2018-03-13

    Vegetation phenology controls the seasonality of many ecosystem processes, as well as numerous biosphere-atmosphere feedbacks. Phenology is also highly sensitive to climate change and variability. Here we present a series of datasets, together consisting of almost 750 years of observations, characterizing vegetation phenology in diverse ecosystems across North America. Our data are derived from conventional, visible-wavelength, automated digital camera imagery collected through the PhenoCam network. For each archived image, we extracted RGB (red, green, blue) colour channel information, with means and other statistics calculated across a region-of-interest (ROI) delineating a specific vegetation type. From the high-frequency (typically, 30 min) imagery, we derived time series characterizing vegetation colour, including "canopy greenness", processed to 1- and 3-day intervals. For ecosystems with one or more annual cycles of vegetation activity, we provide estimates, with uncertainties, for the start of the "greenness rising" and end of the "greenness falling" stages. The database can be used for phenological model validation and development, evaluation of satellite remote sensing data products, benchmarking earth system models, and studies of climate change impacts on terrestrial ecosystems.
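The "canopy greenness" metric referenced in this record is commonly computed as the green chromatic coordinate, GCC = G / (R + G + B), averaged over an ROI. A minimal sketch of that extraction step for a single image (function name and arguments are illustrative):

```python
import numpy as np

def canopy_greenness(image_rgb, roi_mask):
    """Mean green chromatic coordinate, GCC = G / (R + G + B), over the
    pixels selected by a boolean region-of-interest mask."""
    r = image_rgb[..., 0][roi_mask].astype(float)
    g = image_rgb[..., 1][roi_mask].astype(float)
    b = image_rgb[..., 2][roi_mask].astype(float)
    return float(np.mean(g / (r + g + b)))

# Example: a tiny uniform grey image gives GCC = 1/3
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.ones((2, 2), dtype=bool)
gcc = canopy_greenness(img, mask)
```

Applied to every image in a stack, this yields the raw greenness time series that is then aggregated to the 1- and 3-day products described above.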

  11. Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery

    PubMed Central

    Richardson, Andrew D.; Hufkens, Koen; Milliman, Tom; Aubrecht, Donald M.; Chen, Min; Gray, Josh M.; Johnston, Miriam R.; Keenan, Trevor F.; Klosterman, Stephen T.; Kosmala, Margaret; Melaas, Eli K.; Friedl, Mark A.; Frolking, Steve

    2018-01-01

    Vegetation phenology controls the seasonality of many ecosystem processes, as well as numerous biosphere-atmosphere feedbacks. Phenology is also highly sensitive to climate change and variability. Here we present a series of datasets, together consisting of almost 750 years of observations, characterizing vegetation phenology in diverse ecosystems across North America. Our data are derived from conventional, visible-wavelength, automated digital camera imagery collected through the PhenoCam network. For each archived image, we extracted RGB (red, green, blue) colour channel information, with means and other statistics calculated across a region-of-interest (ROI) delineating a specific vegetation type. From the high-frequency (typically, 30 min) imagery, we derived time series characterizing vegetation colour, including “canopy greenness”, processed to 1- and 3-day intervals. For ecosystems with one or more annual cycles of vegetation activity, we provide estimates, with uncertainties, for the start of the “greenness rising” and end of the “greenness falling” stages. The database can be used for phenological model validation and development, evaluation of satellite remote sensing data products, benchmarking earth system models, and studies of climate change impacts on terrestrial ecosystems. PMID:29533393

  12. Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, Andrew D.; Hufkens, Koen; Milliman, Tom

    Vegetation phenology controls the seasonality of many ecosystem processes, as well as numerous biosphere-atmosphere feedbacks. Phenology is also highly sensitive to climate change and variability. Here we present a series of datasets, together consisting of almost 750 years of observations, characterizing vegetation phenology in diverse ecosystems across North America. Our data are derived from conventional, visible-wavelength, automated digital camera imagery collected through the PhenoCam network. For each archived image, we extracted RGB (red, green, blue) colour channel information, with means and other statistics calculated across a region-of-interest (ROI) delineating a specific vegetation type. From the high-frequency (typically, 30 min) imagery, we derived time series characterizing vegetation colour, including "canopy greenness", processed to 1- and 3-day intervals. For ecosystems with one or more annual cycles of vegetation activity, we provide estimates, with uncertainties, for the start of the "greenness rising" and end of the "greenness falling" stages. The database can be used for phenological model validation and development, evaluation of satellite remote sensing data products, benchmarking earth system models, and studies of climate change impacts on terrestrial ecosystems.

  13. Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery

    NASA Astrophysics Data System (ADS)

    Richardson, Andrew D.; Hufkens, Koen; Milliman, Tom; Aubrecht, Donald M.; Chen, Min; Gray, Josh M.; Johnston, Miriam R.; Keenan, Trevor F.; Klosterman, Stephen T.; Kosmala, Margaret; Melaas, Eli K.; Friedl, Mark A.; Frolking, Steve

    2018-03-01

    Vegetation phenology controls the seasonality of many ecosystem processes, as well as numerous biosphere-atmosphere feedbacks. Phenology is also highly sensitive to climate change and variability. Here we present a series of datasets, together consisting of almost 750 years of observations, characterizing vegetation phenology in diverse ecosystems across North America. Our data are derived from conventional, visible-wavelength, automated digital camera imagery collected through the PhenoCam network. For each archived image, we extracted RGB (red, green, blue) colour channel information, with means and other statistics calculated across a region-of-interest (ROI) delineating a specific vegetation type. From the high-frequency (typically, 30 min) imagery, we derived time series characterizing vegetation colour, including “canopy greenness”, processed to 1- and 3-day intervals. For ecosystems with one or more annual cycles of vegetation activity, we provide estimates, with uncertainties, for the start of the “greenness rising” and end of the “greenness falling” stages. The database can be used for phenological model validation and development, evaluation of satellite remote sensing data products, benchmarking earth system models, and studies of climate change impacts on terrestrial ecosystems.

  14. Human Settlements in the South-Central U.S., Viewed at Night from the International Space Station

    NASA Technical Reports Server (NTRS)

    Dawson, Melissa; Evans, Cynthia; Stefanov, William; Wilkinson, M. Justin; Willis, Kimberly; Runco, Susan

    2012-01-01

    A recent innovation of astronauts observing Earth from the International Space Station (ISS) is documenting human footprints by photographing city lights at night time. One of the earliest night-time images from the ISS was the US-Mexico border at El Paso-Ciudad Juarez. The colors, patterns and density of city lights document the differences in the cultural settlement patterns across the border region, as well as within the urban areas themselves. City lights help outline the most populated areas in settlements around the world, and can be used to explore relative population densities, changing patterns of urban/suburban development, transportation networks, spatial relationship to geographic features, and more. The data also provides insight into parameters such as surface roughness for input into local and regional climate modeling and studies of light pollution. The ground resolution of night-time astronaut photography from the ISS is typically an order of magnitude greater than current Defense Meteorological Satellite Program (DMSP) data, and therefore can serve as a "zoom lens" for selected urban areas. Current handheld digital cameras in use on the ISS, optimized for greater light sensitivity, provide opportunities to obtain new detailed imagery of atmospheric phenomena such as airglow, aurora, and noctilucent clouds in addition to documenting urban patterns. ISS astronauts have taken advantage of increasingly sensitive digital cameras to document the world at night in unprecedented detail. In addition, the capability to obtain time-lapse imagery from fixed cameras has been exploited to produce dynamic videos of both changing surface patterns around the world and atmospheric phenomena. We will profile some spectacular images of human settlements over the South-Central U.S., and contrast with other images from around the world. More data can be viewed at http://eol.jsc.nasa.gov/Videos/CrewEarthObservationsVideos/. 
The US-Mexico border is obvious from the different lighting patterns. Not surprisingly, the densely illuminated city of Juarez indicates a higher population; El Paso's smaller population is spread out over a larger area.

  15. The International Space Station Supports International Polar Year (IPY)

    NASA Technical Reports Server (NTRS)

    Evans, Cynthia A.; Pettit, Donald R.

    2007-01-01

    Every day, ISS astronauts photograph designated sites and dynamic events on the Earth's surface using digital cameras equipped with a variety of lenses. Depending on observation parameters, astronauts can collect high-resolution (4-6 m pixel size) or synoptic (lower resolution but covering very large areas) digital data in 3 (red-green-blue) color bands. ISS crews have daily opportunities to document a variety of high-latitude phenomena. Although lighting conditions, ground track and other viewing parameters change with orbital precessions and season, the 51.6° orbital inclination and 400 km altitude of the ISS provide the crew a unique vantage point for collecting image-based data of polar phenomena, including surface observations to roughly 65° latitude, and upper atmospheric observations that reach nearly to the poles. During the 2007-2009 timeframe of the IPY, polar observations will become a scientific focus for the CEO experiment; the experiment is designated ISS-IPY. We solicit requests from scientists for observations from the ISS that are coordinated with or complement ground-based polar studies. The CEO imagery website for ISS-IPY provides an on-line form that allows IPY investigators to interact with CEO scientists and define their imagery requests. This information is integrated into daily communications with the ISS astronauts about their Earth Observations targets. All data collected are cataloged and posted on the website for downloading and assimilation into IPY projects. Examples of imagery and detailed information about scientific observations from the ISS can also be downloaded from the ISS-IPY web site.
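The 4-6 m pixel sizes quoted above follow from the simple pinhole relation between orbital altitude, lens focal length, and detector pixel pitch. A sketch with illustrative camera parameters (not a documented ISS camera configuration):

```python
def ground_sample_distance(altitude_m, focal_length_m, pixel_pitch_m):
    """Nadir ground footprint of one detector pixel under a pinhole model:
    GSD = altitude * pixel_pitch / focal_length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Illustrative: 400 km altitude, 400 mm lens, 5.5 micron pixels
gsd = ground_sample_distance(400e3, 0.400, 5.5e-6)  # ~5.5 m per pixel
```

Swapping to a shorter lens trades resolution for the synoptic, large-area coverage also mentioned in the record.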

  16. Mapping surface disturbance of energy-related infrastructure in southwest Wyoming--An assessment of methods

    USGS Publications Warehouse

    Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne

    2012-01-01

    We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. 
Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat and did so at a far lower cost than QuickBird imagery. The degree of map accuracy required; the costs of image acquisition, software, and operator and computation time; and the tradeoff between spatial extent and resolution should all be considered when evaluating which combination of imagery and information-extraction method might best serve any given land use mapping project. When resources permit, attaining imagery that supports the highest classification and measurement accuracy possible is recommended.
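The pixel-by-pixel comparisons against the hand-digitized reference described above reduce, in their simplest form, to an overall-accuracy score: the fraction of pixels whose class label agrees with the reference map. A minimal sketch (the study's actual accuracy assessment also used ground-truthed control points):

```python
import numpy as np

def overall_accuracy(reference, classified):
    """Fraction of pixels whose class label matches the reference map."""
    reference = np.asarray(reference)
    classified = np.asarray(classified)
    return float(np.mean(reference == classified))

# Example: four pixels, one misclassified
acc = overall_accuracy([0, 1, 1, 0], [0, 1, 0, 0])  # -> 0.75
```

Per-class confusion matrices extend this idea when, as here, some methods classify certain disturbance features well and others poorly.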

  17. Integrated thermal infrared imaging and Structure-from-Motion photogrammetry to map apparent temperature and radiant hydrothermal heat flux at Mammoth Mountain, CA USA

    USGS Publications Warehouse

    Lewis, Aaron; George Hilley,; Lewicki, Jennifer L.

    2015-01-01

    This work presents a method to create high-resolution (cm-scale) orthorectified and georeferenced maps of apparent surface temperature and radiant hydrothermal heat flux and estimate the radiant hydrothermal heat emission rate from a study area. A ground-based thermal infrared (TIR) camera was used to collect (1) a set of overlapping and offset visible imagery around the study area during the daytime and (2) time series of co-located visible and TIR imagery at one or more sites within the study area from pre-dawn to daytime. Daytime visible imagery was processed using the Structure-from-Motion photogrammetric method to create a digital elevation model onto which pre-dawn TIR imagery was orthorectified and georeferenced. Three-dimensional maps of apparent surface temperature and radiant hydrothermal heat flux were then visualized and analyzed from various computer platforms (e.g., Google Earth, ArcGIS). We demonstrate this method at the Mammoth Mountain fumarole area on Mammoth Mountain, CA. Time-averaged apparent surface temperatures and radiant hydrothermal heat fluxes were observed up to 73.7 °C and 450 W m⁻², respectively, while the estimated radiant hydrothermal heat emission rate from the area was 1.54 kW. Results should provide a basis for monitoring potential volcanic unrest and mitigating hydrothermal heat-related hazards on the volcano.
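Converting an apparent surface temperature map into a radiant heat flux map rests on the Stefan-Boltzmann law. A minimal sketch of that per-pixel conversion; the assumed emissivity and background temperature are illustrative, not values from the study:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_heat_flux(surface_c, background_c, emissivity=0.95):
    """Net radiant flux (W m^-2) of a surface relative to a non-thermal
    background, via the Stefan-Boltzmann law: q = eps * sigma * (Ts^4 - Tb^4).
    Temperatures are given in degrees Celsius; emissivity is an assumption."""
    ts = surface_c + 273.15
    tb = background_c + 273.15
    return emissivity * SIGMA * (ts ** 4 - tb ** 4)
```

Summing the per-pixel flux times pixel area over the orthorectified map yields a heat emission rate comparable to the 1.54 kW estimate quoted above.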

  18. Employing airborne multispectral digital imagery to map Brazilian pepper infestation in south Texas.

    USDA-ARS?s Scientific Manuscript database

    A study was conducted in south Texas to determine the feasibility of using airborne multispectral digital imagery for differentiating the invasive plant Brazilian pepper (Schinus terebinthifolius) from other cover types. Imagery obtained in the visible, near infrared, and mid infrared regions of th...

  19. Specification and preliminary design of the CARTA system for satellite cartography

    NASA Technical Reports Server (NTRS)

    Machadoesilva, A. J. F. (Principal Investigator); Neto, G. C.; Serra, P. R. M.; Souza, R. C. M.; Mitsuo, Fernando Augusta, II

    1984-01-01

    Digital imagery acquired by satellite has inherent geometrical distortion due to sensor characteristics and to platform variations. At INPE, a software system for the geometric correction of LANDSAT MSS imagery is under development. Such corrected imagery will be useful for map generation. Important examples are the generation of LANDSAT image-charts for the Amazon region and the possibility of integrating digital satellite imagery into a Geographic Information System.

  20. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large-scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user-friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery that the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer-assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
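The synonym dictionary described above effectively expands each query term before matching it against free-text interpretations. A hypothetical miniature of that idea (the dictionary entries, record texts, and function names are all illustrative, not from Liveview):

```python
# Illustrative synonym dictionary: each query term maps to a set of
# equivalent terms to match in free-text interpretation records.
SYNONYMS = {"nodule": {"nodule", "mass", "lesion"}}

def search(records, term):
    """Return records containing the query term or any of its synonyms
    (case-insensitive substring match)."""
    terms = SYNONYMS.get(term, {term})
    return [r for r in records if any(t in r.lower() for t in terms)]

reports = ["Pulmonary mass in right lobe.", "No acute findings."]
hits = search(reports, "nodule")  # matches via the synonym "mass"
```

A production tool would add stemming, negation handling ("no mass seen"), and indexed lookup, but the expansion step is the core of synonym-aware retrieval.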

  1. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    NASA Astrophysics Data System (ADS)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

    NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation that was targeted was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable” pixels, possibly under-responsive, using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu.
These analyses will provide a broader understanding of the camera system's functionality, which will in turn aid the fly-down to the asteroid by supporting selection of a suitable sampling location.
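The abstract describes the bad-pixel work only as "image segmentation analysis." As a rough, generic illustration of the idea (not the mission's actual MATLAB code), the sketch below flags possibly under-responsive pixels as those whose mean level across a stack of nominally uniform frames falls far below a robust estimate of the array's typical level; the function name, the MAD-based threshold, and the choice of k are all my assumptions.

```python
import numpy as np

def flag_underresponsive(frames, k=5.0):
    """Flag pixels whose mean level across a stack of nominally
    uniform frames falls more than k robust sigmas below the median.

    frames: array of shape (n_frames, rows, cols)
    returns: boolean mask of shape (rows, cols)
    """
    mean = np.mean(frames, axis=0)                 # per-pixel average level
    med = np.median(mean)                          # typical pixel level
    mad = np.median(np.abs(mean - med)) * 1.4826   # robust sigma estimate
    sigma = mad if mad > 0 else 1.0                # guard against mad == 0
    return mean < med - k * sigma

# toy example: a 3x3 sensor with one dead-ish pixel at (1, 1)
frames = np.full((4, 3, 3), 100.0)
frames[:, 1, 1] = 10.0
mask = flag_underresponsive(frames)
```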

  2. An interactive toolkit to extract phenological time series data from digital repeat photography

    NASA Astrophysics Data System (ADS)

    Seyednasrollah, B.; Milliman, T. E.; Hufkens, K.; Kosmala, M.; Richardson, A. D.

    2017-12-01

Near-surface remote sensing and in situ photography are powerful tools to study how climate change and climate variability influence vegetation phenology and the associated seasonal rhythms of green-up and senescence. The rapidly growing PhenoCam network has been using in situ digital repeat photography to study phenology at almost 500 locations around the world, with an emphasis on North America. However, extracting time series data from multiple years of half-hourly imagery - where each set of images may contain several regions of interest (ROIs) corresponding to different species or vegetation types - is not always straightforward. Large volumes of data require substantial processing time, and changes (either intentional or accidental) in the camera field of view require adjustment of ROI masks. Here, we introduce and present "DrawROI" as an interactive web-based application for imagery from PhenoCam. DrawROI can also be used offline, as a fully independent toolkit that significantly facilitates extraction of phenological data from any stack of digital repeat photography images. DrawROI provides a responsive environment for phenological scientists to interactively a) delineate ROIs, b) handle field of view (FOV) shifts, and c) extract and export time series data characterizing image color (i.e. red, green and blue channel digital numbers for the defined ROI). The application utilizes artificial intelligence and advanced machine learning techniques and gives users the opportunity to redraw new ROIs every time an FOV shift occurs. DrawROI also offers a quality control flag to indicate noisy data and low-quality images due to fog or snow. The web-based application significantly accelerates the process of creating new ROIs and modifying pre-existing ROIs in the PhenoCam database. 
The offline toolkit is presented as an open-source R package that can be applied to similar time-lapse photography datasets, making more phenological data available to the large community of ecologists. We will illustrate the use of the toolkit using imagery from a selection of sites within the National Ecological Observatory Network (NEON).
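The color statistic most commonly derived from the per-channel digital numbers of a PhenoCam ROI is the green chromatic coordinate, GCC = G / (R + G + B), averaged over the ROI. A minimal sketch of that computation, assuming images are loaded as RGB arrays (this is my own illustration, not DrawROI code):

```python
import numpy as np

def green_chromatic_coordinate(img, mask):
    """Mean green chromatic coordinate GCC = G / (R + G + B)
    over the pixels selected by a boolean ROI mask.

    img: (rows, cols, 3) RGB array; mask: (rows, cols) boolean array.
    """
    roi = img[mask].astype(float)   # (n_pixels, 3) rows of R, G, B
    totals = roi.sum(axis=1)
    valid = totals > 0              # skip all-black pixels (division by zero)
    gcc = roi[valid, 1] / totals[valid]
    return gcc.mean()

# toy example: a 2x2 "image" with a mask covering all pixels
img = np.array([[[10, 20, 10], [30, 60, 30]],
                [[5, 10, 5],   [0, 0, 0]]], dtype=np.uint8)
mask = np.ones((2, 2), dtype=bool)
print(green_chromatic_coordinate(img, mask))  # → 0.5
```

Applied to every image in a stack, this yields exactly the kind of greenness time series the toolkit exports.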

  3. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

We designed a calibration target that can be scanned by a 3D laser scanner while simultaneously being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a digital-camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments demonstrate that the method is reliable.
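For reference, the core of the traditional DLT step (before the distortion model the authors add) reduces to a linear least-squares problem in eleven parameters mapping object-space coordinates to image coordinates. A hedged sketch; the function names and synthetic example are mine, not the paper's:

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Solve the 11 DLT parameters L1..L11 mapping object points
    (X, Y, Z) to image points (u, v) by linear least squares.
    Each correspondence contributes two rows:
      u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
      v = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1)
    Needs at least 6 non-coplanar points; distortion terms omitted.
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b += [u, v]
    L, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return L

def dlt_project(L, xyz):
    """Reproject object points with estimated DLT parameters."""
    X, Y, Z = np.asarray(xyz, float).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.stack([u, v], axis=1)
```

With exact synthetic correspondences (e.g. the eight corners of a cube projected by known parameters), the estimated parameters reproduce the image coordinates to numerical precision; the camera's interior and exterior orientation can then be decomposed from L1..L11.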

  4. The effect of flight altitude to data quality of fixed-wing UAV imagery: case study in Murcia, Spain

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Keesstra, Saskia; Cammeraat, Erik

    2014-05-01

Unmanned Aerial Systems (UAS) are becoming popular tools in the geosciences due to improving technology and processing techniques. They can potentially fill the gap between spaceborne or manned-aircraft remote sensing and terrestrial remote sensing, in terms of both spatial and temporal resolution. In this study we tested a fixed-wing Unmanned Aerial System (UAS) for the application of digital landscape analysis. The focus was to analyze the effect of flight altitude on the accuracy and detail of the produced digital elevation models, derived terrain properties, and orthophotos. The aircraft was equipped with a Panasonic GX1 16 MP compact camera with a 20 mm lens to capture standard JPEG RGB images. Images were processed using Agisoft PhotoScan Pro, which includes the structure-from-motion and multiview stereopsis algorithms. The test area consisted of small abandoned agricultural fields in semi-arid Murcia in southeastern Spain. The area was severely damaged after a destructive rainfall event, with damaged check dams, rills, deep gully incisions, and piping. Results suggest that careful decisions on flight altitude are essential to find a balance between area coverage, ground sampling distance, UAS ground speed, camera processing speed, and the accurate registration of the specific soil erosion features of interest.
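The altitude trade-off described above is driven largely by the ground sampling distance, which scales linearly with flying height: GSD = H × pixel pitch / focal length. A quick sketch; the ~3.8 µm pixel pitch used in the example is my own assumption for a 16 MP Micro Four Thirds sensor, not a figure from the paper:

```python
def ground_sampling_distance(altitude_m, focal_mm, pixel_um):
    """Ground sampling distance (meters per pixel) for a nadir view:
    GSD = flying height * pixel pitch / focal length."""
    return altitude_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)

# 100 m altitude, 20 mm lens, assumed 3.8 um pixel pitch
gsd = ground_sampling_distance(100, 20, 3.8)
print(gsd)  # → 0.019 m, i.e. about 1.9 cm per pixel
```

Halving the altitude halves the GSD but quarters the footprint per frame, which is exactly the coverage-versus-detail balance the study examines.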

  5. Urban cover mapping using digital, high-resolution aerial imagery

    Treesearch

    Soojeong Myeong; David J. Nowak; Paul F. Hopkins; Robert H. Brock

    2003-01-01

    High-spatial resolution digital color-infrared aerial imagery of Syracuse, NY was analyzed to test methods for developing land cover classifications for an urban area. Five cover types were mapped: tree/shrub, grass/herbaceous, bare soil, water and impervious surface. Challenges in high-spatial resolution imagery such as shadow effect and similarity in spectral...

  6. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  7. The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.

    2014-04-01

The HRSC Experiment: Imagery is the major source for our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express Mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but also a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.

  8. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transportation System Flight No. 70, STS-70). The camera includes a six-position filter wheel, a third-generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.

  9. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of each human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan-tilt angles at capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of those captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.

  10. EAARL Coastal Topography and Imagery-Assateague Island National Seashore, Maryland and Virginia, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Stevens, Sara

    2010-01-01

These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Assateague Island National Seashore in Maryland and Virginia, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
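The first/last-return extraction that ALPS performs on each waveform can be pictured with a toy example: find the first and last range bins in which the return rises above a noise threshold (a deliberately simplified stand-in for the actual waveform-processing algorithms; the function name and threshold rule are my assumptions):

```python
import numpy as np

def first_last_returns(waveform, threshold):
    """Indices of the first and last samples where a lidar waveform
    exceeds a noise threshold (one sample = one range bin).
    Returns None if no sample clears the threshold."""
    above = np.flatnonzero(np.asarray(waveform) > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

# toy waveform: noise floor ~1, canopy return at bin 5, ground at bin 12
wf = [1, 1, 1, 1, 2, 8, 3, 1, 1, 1, 2, 4, 9, 2, 1]
print(first_last_returns(wf, 3))  # → (5, 12)
```

In a real system each bin index converts to a range via the sampling interval and the speed of light; the first return maps vegetation canopy, the last approximates the bare earth.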

  11. EAARL coastal topography and imagery-Fire Island National Seashore, New York, 2009

    USGS Publications Warehouse

    Vivekanandan, Saisudha; Klipp, E.S.; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired on July 9 and August 3, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  12. EAARL Coastal Topography and Imagery-Naval Live Oaks Area, Gulf Islands National Seashore, Florida, 2007

    USGS Publications Warehouse

    Nagle, David B.; Nayegandhi, Amar; Yates, Xan; Brock, John C.; Wright, C. Wayne; Bonisteel, Jamie M.; Klipp, Emily S.; Segura, Martha

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) topography, first-surface (FS) topography, and canopy-height (CH) datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Naval Live Oaks Area in Florida's Gulf Islands National Seashore, acquired June 30, 2007. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  13. How much camera separation should be used for the capture and presentation of 3D stereoscopic imagery on binocular HMDs?

    NASA Astrophysics Data System (ADS)

    McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul

    2011-06-01

Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature regarding the question of stereo-pair camera separation using desk-mounted or larger-scale stereoscopic displays, and apply our findings to potential HMD applications, including command & control, teleoperation, information and scientific visualization, and entertainment.
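The practical effect of camera separation can be anchored with the standard parallel-camera geometry: a point at depth Z produces a horizontal image disparity d = f·b/Z, where f is the focal length in pixels and b the interaxial baseline. A small illustrative sketch (not from the reviewed literature):

```python
def image_disparity(baseline_m, focal_px, depth_m):
    """Horizontal disparity in pixels of a scene point at depth Z,
    for an idealized parallel stereo camera pair: d = f * b / Z."""
    return focal_px * baseline_m / depth_m

# human-like 65 mm baseline, 1000 px focal length, object 2 m away
print(image_disparity(0.065, 1000, 2.0))  # → 32.5 pixels
```

Doubling the baseline doubles every disparity, which exaggerates perceived depth; this is why separation must be tuned to the display and the working distance rather than fixed at the interocular default.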

  14. Digitized Photography: What You Can Do with It.

    ERIC Educational Resources Information Center

    Kriss, Jack

    1997-01-01

    Discusses benefits of digital cameras which allow users to take a picture, store it on a digital disk, and manipulate/export these photos to a print document, Web page, or multimedia presentation. Details features of digital cameras and discusses educational uses. A sidebar presents prices and other information for 12 digital cameras. (AEF)

  15. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

Recently, the pixel counts and features of consumer-grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is in strong demand in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy was first investigated as a star-tracker problem, and many target-location algorithms have been proposed. It is widely accepted that least-squares ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirming the subpixel target-location algorithms for consumer-grade digital cameras, establishing the relationship between the number of edge points along the target boundary and accuracy, and providing an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motivation, this paper presents an empirical test of several subpixel target-location algorithms and an indicator for estimating accuracy, using real data acquired indoors with seven consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
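Among the target-location algorithms such studies compare, the simplest baseline is the intensity-weighted centroid, which already achieves subpixel accuracy on well-exposed circular targets (the paper finds least-squares ellipse fitting more accurate; this sketch is illustrative only):

```python
import numpy as np

def weighted_centroid(patch):
    """Intensity-weighted centroid (cx, cy) of an image patch:
    the simplest subpixel target-location estimator."""
    patch = patch.astype(float)
    total = patch.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# synthetic Gaussian target centered at the subpixel position (3.2, 2.8)
ys, xs = np.mgrid[0:7, 0:7]
patch = np.exp(-((xs - 3.2) ** 2 + (ys - 2.8) ** 2) / 2.0)
cx, cy = weighted_centroid(patch)
```

On this noiseless synthetic target the centroid recovers the true center to within a few hundredths of a pixel; real-image accuracy depends on noise, background gradients, and the number of edge points, which is exactly what the paper quantifies.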

  16. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, from several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the lightweight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for resampling the images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, since we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, and block diagrams of the described architecture. The stacked image obtained on real surveys shows no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device. 
An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real-time the gyrometers of the IMU.
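The register-then-accumulate structure of such a stacking pipeline can be sketched with a translation-only aligner. Note the substitutions: phase correlation stands in here for the paper's FAST-plus-template-matching registration, there is no camera distortion model, and nothing is FPGA-accelerated, so this is purely illustrative.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Integer (dy, dx) such that img ~= np.roll(ref, (dy, dx), axis=(0, 1)),
    found as the peak of the normalized cross-power spectrum."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the signed range [-N/2, N/2)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stack(frames):
    """Register every frame to the first by translation, then average,
    emulating a long exposure from several short ones."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        acc += np.roll(f.astype(float), (-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

With uncorrelated sensor noise, averaging N registered frames improves the signal-to-noise ratio by roughly sqrt(N), which is what makes the equivalent long exposure worthwhile.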

  17. Landsat 3 return beam vidicon response artifacts

    USGS Publications Warehouse

    ,; Clark, B.

    1981-01-01

The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. See Figure 1. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D, for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the picture element, or pixel, that results is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident. 
It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently, RBV imagery was processed directly from wideband video tape data onto 70-mm film. This changed in September 1980 when digital production of RBV data at the NASA Goddard Space Flight Center (GSFC) began. The wideband video tape data are now subjected to analog-to-digital preprocessing and corrected both radiometrically and geometrically to produce high-density digital tapes (HDT's). The HDT data are subsequently transmitted via satellite (Domsat) to the EROS Data Center (EDC) where they are used to generate 241-mm photographic images at a scale of 1:500,000. Computer-compatible tapes of the data are also generated as digital products. Of the RBV data acquired since September 1, 1980, approximately 2,800 subscenes per month have been processed at EDC.

  18. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
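The exposure/sharpness trade-off such a demonstration explores has a classical quantitative anchor: Lord Rayleigh's rule of thumb for the sharpest pinhole, d ≈ 1.9·sqrt(λf), which balances geometric blur (favoring small holes) against diffraction (favoring large ones). A quick sketch; the 5 cm pinhole-to-sensor distance is just an example value, not from the article:

```python
import math

def optimal_pinhole_diameter(focal_m, wavelength_m=550e-9):
    """Rayleigh's rule of thumb d ~ 1.9 * sqrt(lambda * f) for the
    pinhole diameter that balances geometric blur against diffraction
    (defaults to green light, 550 nm)."""
    return 1.9 * math.sqrt(wavelength_m * focal_m)

d = optimal_pinhole_diameter(0.05)   # 5 cm to the sensor
print(d)  # ≈ 3.15e-4 m, i.e. about 0.32 mm
```

Students can compare holes above and below this diameter and observe the image soften in both directions, while exposure time scales inversely with the hole's area.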

  19. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, GPS, an inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a rate of about one frame per second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric; that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. 
Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities that are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate but not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
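The final step described above, turning matched interest-point displacements into a velocity field, is simply displacement over elapsed time once the points are in object-space coordinates. A trivial sketch (the array layout and units are my assumptions):

```python
import numpy as np

def surface_velocities(pts_t1, pts_t2, dt_days):
    """Velocity vectors (meters/day) from matched interest-point
    positions (x, y in meters) observed at two epochs dt_days apart."""
    return (np.asarray(pts_t2, float) - np.asarray(pts_t1, float)) / dt_days

# two matched points observed 2 days apart
v = surface_velocities([[0, 0], [10, 5]], [[2, 0], [13, 9]], 2.0)
```

Each row of `v` is one point's velocity vector; speed is its Euclidean norm, and mapping these vectors over the glacier tongue gives the velocity field discussed in the abstract.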

  20. Electronographic cameras for space astronomy.

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We are also developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  1. Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes

    NASA Astrophysics Data System (ADS)

    Ozcan, O.

    2016-12-01

    Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs, which bridge the gap between satellite-scale and field-scale applications, are now used in a variety of application areas to acquire hyperspatial, high-temporal-resolution imagery, owing to their operational capacity and short acquisition times relative to conventional photogrammetric methods. UAVs have been used in various fields, such as the creation of 3-D earth models, production of high-resolution orthophotos, network planning, field monitoring, and monitoring of agricultural lands. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of central importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. In this study, we aimed to reveal the spatial accuracy of the images acquired from the integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm, depending upon the sensor being used. Preliminary results revealed that vertical comparison of the UAV-derived point clouds with GPS measurements showed average distances at the cm level. Larger values are found in areas where instantaneous changes in surface are present.
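The altitude-to-resolution relationship quoted above follows from simple pinhole geometry; the sensor values below (4.4 µm pixels, 8.8 mm focal length) are assumptions for illustration, not parameters from the study.

```python
def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
    """GSD for a nadir pinhole camera: pixel size scaled by altitude / focal length."""
    return pixel_size_m * altitude_m / focal_length_m

# Assumed sensor: 4.4 um pixels behind an 8.8 mm lens
gsd_30 = ground_sample_distance(4.4e-6, 8.8e-3, 30.0)    # 0.015 m (1.5 cm) at 30 m
gsd_100 = ground_sample_distance(4.4e-6, 8.8e-3, 100.0)  # 0.05 m (5 cm) at 100 m
```

With these assumed optics, the 30-100 m flight envelope reproduces the 1-5 cm resolution range mentioned in the abstract.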

  2. Geometric correction and digital elevation extraction using multiple MTI datasets

    USGS Publications Warehouse

    Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.

    2007-01-01

    Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collects is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality of MTI makes it a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.
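The trade-off between small and large separation angles can be illustrated with the standard base-to-height rule of thumb; this is a textbook approximation, not the paper's actual MTI sensor model, and the numbers below are illustrative.

```python
import math

def vertical_precision(sigma_match_m, convergence_deg):
    """Approximate stereo height precision: sigma_z ~ sigma_match / (B/H),
    where the base-to-height ratio B/H = 2 * tan(theta / 2) for a
    symmetric convergence angle theta between the two views."""
    b_over_h = 2.0 * math.tan(math.radians(convergence_deg) / 2.0)
    return sigma_match_m / b_over_h

# For the same 1 m matching uncertainty, a narrow angle magnifies height error
narrow = vertical_precision(1.0, 5.0)   # ~11.5 m: weak geometry
wide = vertical_precision(1.0, 30.0)    # ~1.9 m: stronger geometry, but more occlusion
```

This is why adding a third or fourth look angle helps: intermediate geometries can be combined without relying on a single extreme pair.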

  3. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed on pasture and rangeland that are typically vast and not easily accessible by vehicle. The current method for locating and counting livestock is visual observation, which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals; this is noisy, disturbs the animals, and may introduce a source of error in counts. Such manual approaches are expensive, slow, and labor intensive. In this paper we study the combination of small unmanned aerial vehicles (sUAVs) and machine vision technology as a valuable alternative to manual animal surveying. A fixed-wing UAV fitted with a GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAS flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results, compared against ground truth, show the effectiveness of our algorithm.
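A minimal sketch of the spatial side of such a detector is connected-component counting on a thresholded mask with a minimum-size filter; this assumes the spectral step has already produced a binary mask, and it is only a stand-in, since the abstract does not spell out the actual algorithm.

```python
import numpy as np

def count_blobs(mask, min_pixels=3):
    """Count 4-connected components of at least `min_pixels` pixels in a
    boolean mask (toy stand-in for spatial/spectral animal detection)."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, size = [(i, j)], 0     # flood-fill one component
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_pixels:        # reject single-pixel noise
                    count += 1
    return count

# Toy mask: one 4-pixel blob (an "animal") plus one isolated noise pixel
demo = np.zeros((5, 5), bool)
demo[0:2, 0:2] = True
demo[4, 4] = True
animals = count_blobs(demo, min_pixels=3)  # 1
```

The minimum-size filter is what separates plausible animal-sized blobs from speckle in the orthomosaic.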

  4. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause the computed image hash to be totally different from the secure hash.
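The sign-then-verify scheme described in this patent abstract can be sketched with a toy RSA keypair; the tiny primes below are for illustration only, the patent does not specify an algorithm, and a real camera would use full-size keys and padding.

```python
import hashlib

# Toy RSA keypair: n = 61 * 53 = 3233, with e * d = 1 (mod lcm(60, 52))
N, E, D = 3233, 17, 2753  # N and E public (camera housing), D private (processor)

def sign(image_bytes):
    """Camera side: hash the image file, then 'encrypt' the hash with the
    private key to form the digital signature stored alongside the file."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % N
    return pow(h, D, N)

def verify(image_bytes, signature):
    """Verifier side: decrypt the signature with the public key and compare
    it with a freshly computed hash of the image file."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % N
    return pow(signature, E, N) == h

sig = sign(b"raw image bytes")
ok = verify(b"raw image bytes", sig)  # True: the file is unaltered
```

Any change to the file changes its hash, so the decrypted signature no longer matches and verification fails.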

  5. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  6. SLR digital camera for forensic photography

    NASA Astrophysics Data System (ADS)

    Har, Donghwan; Son, Youngho; Lee, Sungwon

    2004-06-01

    Forensic photography, systematically established in the late 19th century by Alphonse Bertillon of France, has developed substantially over the past century, and its development will accelerate further with advances in high technology, in particular digital technology. This paper reviews three studies to answer the question: can the SLR digital camera replace traditional silver-halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of the SLR digital camera to silver-halide photography. 2. How much is ultraviolet or infrared sensitivity improved when the UV/IR cutoff filter built into the SLR digital camera is removed? 3. Comparison of the relative sensitivity of CCD and CMOS sensors to ultraviolet and infrared. The test results showed that the SLR digital camera has very low sensitivity to ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity to ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of silver-halide film. This shows the possibility of replacing silver-halide ultraviolet and infrared photography with the SLR digital camera. Thus, the SLR digital camera appears useful for forensic photography, which deals with many ultraviolet and infrared photographs.

  7. Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery

    Treesearch

    Demetrios Gatziolis

    2006-01-01

    A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...

  8. Analysis of GOES imagery and digitized data for the SEV-UPS period, August 1979

    NASA Technical Reports Server (NTRS)

    Bowley, C. J.; Burke, H. H. K.; Barnes, J. C.

    1981-01-01

    In support of the Southeastern Virginia Urban Plume Study (SEV-UPS), GOES satellite imagery was analyzed for the month of August 1979. The analyzed GOES images provide an additional source of meteorological input useful in the evaluation of air quality data collected during the month-long SEV-UPS experiment. In addition to the imagery analysis, GOES digitized data were analyzed for the period of August 6 to 11, during which a regional haze pattern was detectable in the imagery. The results of the study indicate that the observed haze patterns correspond closely with areas shown in surface-based measurements to have reduced visibilities and elevated pollution levels. Moreover, the results of the analysis of digitized data indicate that digital reflectance counts can be directly related to haze intensity both over land and ocean. The model results agree closely with the observed GOES digital reflectance counts, providing further indication that satellite remote sensing can be a useful tool for monitoring regional elevated pollution episodes.

  9. Comparison of aerial imagery from manned and unmanned aircraft platforms for monitoring cotton growth

    USDA-ARS?s Scientific Manuscript database

    Unmanned aircraft systems (UAS) have emerged as a low-cost and versatile remote sensing platform in recent years, but little work has been done on comparing imagery from manned and unmanned platforms for crop assessment. The objective of this study was to compare imagery taken from multiple cameras ...

  10. Detection of rice sheath blight using an unmanned aerial system with high-resolution color and multispectral imaging.

    PubMed

    Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong

    2018-01-01

    Detection and monitoring are the essential first steps for effective management of sheath blight (ShB), a major disease of rice worldwide. Unmanned aerial systems have high potential for improving this detection process, since they can reduce the time needed to scout for the disease at a field scale and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery of research plots with 67 rice cultivars and elite lines. The collected imagery was then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of the images, it was found that the color transformation could qualitatively detect the ShB-infected areas in the field plots, but was less effective at distinguishing different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truths of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. Relationship analyses indicate a strong correlation between ground-measured and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Image-based NDVIs extracted from the multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool to detect ShB at a field scale.
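The NDVI and the agreement statistics reported above are standard formulas; a minimal sketch follows, with made-up band values rather than the study's data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def r_squared(y_true, y_pred):
    """Coefficient of determination between reference values and estimates."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error of the estimates."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative reflectances: healthy vegetation has high NIR, low red
v = ndvi(0.5, 0.1)  # 0.4 / 0.6 = 0.666...
```

In the study, `r_squared` and `rmse` would be evaluated between GreenSeeker NDVI (or disease severity) and the NDVI extracted from the multispectral images.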

  11. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbiter Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  12. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and on the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and for minimizing noise. The makeup of the scene, the sensor, and the aerial platform influence the selection of the aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO. The single most important capture variable is exposure bias (EV): a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occur around -0.7 to -0.3 EV exposure bias. The ideal sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
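The AIM estimate mentioned above reduces to the distance the platform moves during the exposure, expressed in pixels; a sketch with assumed numbers follows (the speed, shutter time, and GSD are illustrative, not values from the thesis).

```python
def aim_blur_pixels(ground_speed_mps, shutter_s, gsd_m):
    """Apparent image motion blur in pixels: ground distance traversed
    during the exposure, divided by the ground sample distance."""
    return ground_speed_mps * shutter_s / gsd_m

# A platform at 50 m/s with a 1/1000 s shutter and 5 cm GSD smears ~1 pixel.
blur = aim_blur_pixels(50.0, 1.0 / 1000.0, 0.05)  # 1.0
```

Keeping this value below about one pixel is the usual criterion when trading shutter speed against ISO and aperture.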

  13. Evaluation of modified portable digital camera for screening of diabetic retinopathy.

    PubMed

    Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi

    2009-01-01

    To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to 2.4x, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.

  14. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Ye, W.; Qiao, G.; Kong, F.; Guo, S.; Ma, X.; Tong, X.; Li, R.

    2016-06-01

    Global climate change is one of the major challenges that all nations commonly face. Long-term observations of the Antarctic ice sheet have played a critical role in quantitatively estimating and predicting the effects of global change. The film-based ARGON reconnaissance imagery provides a remarkable data source for studying the Antarctic ice sheet in the 1960s, greatly extending the time period of Antarctic surface observations. To deal with the low-quality images and the unavailability of camera poses, a systematic photogrammetric approach is proposed to reconstruct the interior and exterior orientation information for further glacial mapping applications, including ice flow velocity mapping and mass balance estimation. Noteworthy details of the geometric modelling of the ARGON images are introduced, including methods and results for handling film deformation, damaged or missing fiducial marks and calibration reports, automatic fiducial mark detection, control point selection through Antarctic shadow and ice surface terrain analysis, and others. Several sites in East Antarctica were tested. As an example, four images of the Byrd glacier region were used to assess the accuracy of the geometric modelling. A digital elevation model (DEM) and an orthophoto map of Byrd glacier were generated. The accuracy of the ground positions estimated using independent check points is within one nominal pixel (140 m) of the ARGON imagery. Furthermore, a number of significant features, such as ice flow velocity and regional change patterns, will be extracted and analyzed.

  15. Strengthened IAEA Safeguards-Imagery Analysis: Geospatial Tools for Nonproliferation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pabian, Frank V

    2012-08-14

    This slide presentation focuses on the growing role and importance of imagery analysis for IAEA safeguards applications and how commercial satellite imagery, together with newly available geospatial tools, can be used to promote 'all-source synergy.' As additional sources of openly available information, satellite imagery in conjunction with geospatial tools can be used to significantly augment and enhance existing information gathering techniques, procedures, and analyses in the remote detection and assessment of nonproliferation-relevant activities, facilities, and programs. Foremost among the geospatial tools are the 'Digital Virtual Globes' (i.e., Google Earth, Virtual Earth, etc.), which are far better than the previously used simple 2-D plan-view line drawings for visualization of known and suspected facilities of interest, and which can be critical to: (1) Site familiarization and true geospatial context awareness; (2) Pre-inspection planning; (3) Onsite orientation and navigation; (4) Post-inspection reporting; (5) Site monitoring over time for changes; (6) Verification of states' site declarations and input to State Evaluation reports; and (7) A common basis for discussions among all interested parties (Member States). Additionally, as an 'open source,' such virtual globes can also provide a new, essentially free means to conduct broad-area searches for undeclared nuclear sites and activities - either alleged through open source leads; identified on internet blogs and wiki layers, with input from a 'free' cadre of global browsers and/or by knowledgeable local citizens (a.k.a. 'crowdsourcing'), which can include ground photos and maps; or by other initiatives based on existing information and in-house country knowledge.
They also provide a means to acquire ground photography taken by locals, hobbyists, and tourists of the surrounding locales that can be useful in identifying and discriminating between relevant and non-relevant facilities and their associated infrastructure. The digital globes also provide highly accurate terrain mapping for better geospatial context and allow detailed 3-D perspectives of all sites or areas of interest. 3-D modeling software (i.e., Google's SketchUp 6, newly available in 2007), when used in conjunction with these digital globes, can significantly enhance individual building characterization and visualization (including interiors), allowing for better assessments, including walk-arounds or fly-arounds, and perhaps better decision making on multiple levels (e.g., the best placement for International Atomic Energy Agency (IAEA) video monitoring cameras).

  16. Structure from Motion (SfM) photogrammetry applied to historical imagery: plug & play?

    NASA Astrophysics Data System (ADS)

    Bakker, Maarten; Lane, Stuart N.

    2017-04-01

    The development of Structure from Motion (SfM) photogrammetry has led to a vast increase and expansion of geomorphological applications. Highly detailed Digital Elevation Models (DEMs) can be efficiently generated from a variety of platforms that cover a large range of spatial scales. For the application of DEMs in geomorphic change analysis, precision and spatial resolution are not of sole importance, but also their accuracy, temporal resolution and temporal coverage. The use of archival imagery may substantially lengthen temporal coverage, allowing quantification of annual to decadal scale landform change. Whilst archival photogrammetry is not new, a question arises as to how applicable SfM methods are as a more cost-effective and straightforward alternative to the conventional approach. Here, we studied a relatively extreme case where we applied SfM techniques to archival aerial imagery, to investigate the decadal evolution of a low relief braided river. The Borgne is an Alpine river in south-west Switzerland which is strongly affected by flow abstraction for hydropower, allowing the fairly straightforward application of photogrammetry on the near-dry river bed. For 8 sets of scanned historical aerial images in the period 1959-2005 we performed Ground Control Point (GCP) assisted bundle adjustment using both classical archival digital photogrammetry (used as a reference dataset) and SfM based photogrammetry. For the SfM method, no further data were used to constrain camera or exterior orientation parameters a priori, but instead we used these for a posteriori verification. The resulting densified point clouds were registered onto a reference surface based on stable areas, allowing the correction for any systematic error in DEMs that may arise from (random) error in the bundle adjustment. The obtained results show that the quality of the SfM based bundle adjustment is similar to that of the classical photogrammetric approach. 
Next to image scale, quality is strongly driven by the ability of computer vision techniques to extract tie-points, which is controlled by image texture (quantified here using entropy) and image overlap (redundancy). Depending on the image set used, these characteristics may therefore be effectively exploited or may pose a limitation for application. The quality of the results aside, we found that the recovered bundle adjustment parameters were not necessarily correct and that there was the possibility of a trade-off, between estimated focal length and camera flying height for example, such that the right results were obtained, if not for the right reasons. This highlights the need to assess camera and exterior orientation parameters, and to address the systematic errors that may follow from them. For the latter, we found that point cloud registration is crucial for accurate change quantification and geomorphic interpretation, particularly in a low-relief environment such as a braided river. We conclude that, given a suitable set of images and due consideration of the principles of classical photogrammetric analysis, SfM methods can be effectively applied to archival imagery, but that this is by no means a plug-and-play methodology.
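Image texture quantified by entropy, as mentioned above, is typically the Shannon entropy of the grey-level histogram; a minimal sketch, with toy images standing in for archival frames:

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram;
    low-texture areas yield low entropy, and hence few tie-points."""
    hist, _ = np.histogram(np.asarray(gray).ravel(), bins=bins, range=(0, bins))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

flat = image_entropy([[7, 7], [7, 7]])          # 0.0 bits: featureless patch
checker = image_entropy([[0, 255], [255, 0]])   # 1.0 bit: two equally frequent levels
```

Computed per image or per tile, such a measure flags the featureless areas (e.g. uniform gravel or snow) where tie-point extraction is likely to fail.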

  17. Suitability of low cost commercial off-the-shelf aerial platforms and consumer grade digital cameras for small format aerial photography

    NASA Astrophysics Data System (ADS)

    Turley, Anthony Allen

    Many research projects require the use of aerial images. Wetlands evaluation, crop monitoring, wildfire management, environmental change detection, and forest inventory are but a few of the applications of aerial imagery. Low altitude Small Format Aerial Photography (SFAP) is a bridge between satellite and man-carrying aircraft image acquisition and ground-based photography. The author's project evaluates digital images acquired using low cost commercial digital cameras and standard model airplanes to determine their suitability for remote sensing applications. Images from two different sites were obtained. Several photo missions were flown over each site, acquiring images in the visible and near infrared electromagnetic bands. Images were sorted and analyzed to select those with the least distortion, and blended together with Microsoft Image Composite Editor. By selecting images taken only minutes apart, radiometric qualities of the images were virtually identical, yielding no blend lines in the composites. A commercial image stitching program, Autopano Pro, was purchased during the later stages of this study. Autopano Pro was often able to mosaic photos that the free Image Composite Editor was unable to combine. Using telemetry data from an onboard data logger, images were evaluated to calculate scale and spatial resolution. ERDAS ER Mapper and ESRI ArcGIS were used to rectify composite images. Despite the limitations inherent in consumer grade equipment, images of high spatial resolution were obtained. Mosaics of as many as 38 images were created, and the author was able to record detailed aerial images of forest and wetland areas where foot travel was impractical or impossible.
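Calculating scale and coverage from telemetry follows the standard vertical-photo relations; a sketch with assumed numbers (the focal length, flying height, and sensor width below are illustrative, not values from the study):

```python
def photo_scale_denominator(focal_length_m, flying_height_m):
    """Vertical photo scale 1:S, with S = flying height / focal length."""
    return flying_height_m / focal_length_m

def ground_footprint_m(sensor_width_m, focal_length_m, flying_height_m):
    """Ground width covered by one frame: sensor width scaled by H / f."""
    return sensor_width_m * flying_height_m / focal_length_m

# Assumed: 50 mm lens at 300 m above ground with a 24 mm wide sensor
scale = photo_scale_denominator(0.05, 300.0)        # 6000 -> scale 1:6000
footprint = ground_footprint_m(0.024, 0.05, 300.0)  # 144 m across-track
```

The same H/f ratio, divided into the sensor's pixel pitch, gives the ground sample distance of each image.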

  18. Astronomy education through hands-on photography workshops

    NASA Astrophysics Data System (ADS)

    Schofield, I.; Connors, M. G.; Holmberg, R.

    2013-12-01

    Athabasca University (AU), the Athabasca University Geophysical and Geo-Space Observatories (AUGO/AUGSO), the Rotary Club of Athabasca, and Science Outreach Athabasca have designed a three-day science workshop entitled Photography and the Night Sky. This pilot workshop, aimed primarily at high-school-aged students, serves as an introduction to observational astronomy as seen in the western Canadian night sky, using digital astrophotography without a telescope or tracking mount. Participants learn the layout of the night sky by photographing it proficiently using digital single lens reflex (DSLR) camera kits including telephoto and wide-angle lenses, a tripod, and a cable release. The kits are assembled from entry-level consumer-grade camera gear so as to be affordable to participants who wish to purchase their own equipment after the workshop. Basic digital photo editing is covered using free photo-editing software (IrfanView). Students are given an overview of observational astronomy using interactive planetarium software (Stellarium) before heading outdoors to shoot the night sky. Photography is conducted at AU's auroral observatories, both of which possess dark open sky ideal for night sky viewing. If space weather conditions are favorable, there are opportunities to photograph the aurora borealis and to compare results with imagery generated by the all-sky auroral imagers located at the Geo-Space observatory. The aim of this program is to develop awareness of the science and beauty of the night sky, while promoting photography as a rewarding, lifelong hobby. Moreover, emphasis is placed on western Canada's unique subauroral location, which makes aurora watching highly accessible and rewarding in 2013, the maximum of the current solar cycle.

  19. USGS QA Plan: Certification of digital airborne mapping products

    USGS Publications Warehouse

    Christopherson, J.

    2007-01-01

    To facilitate acceptance of new digital technologies in aerial imaging and mapping, the US Geological Survey (USGS) and its partners have launched a Quality Assurance (QA) Plan for Digital Aerial Imagery. This should provide a foundation for the quality of digital aerial imagery and products. It introduces broader considerations regarding processes employed by aerial flyers in collecting, processing and delivering data, and provides training and information for US producers and users alike.

  20. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  1. Next-generation digital camera integration and software development issues

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next-generation digital cameras arising from requirements for connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. The real-time constraint for a digital camera may be defined as the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating system. This paper presents the LSI DCAM-101, a single-chip digital camera solution. It gives an overview of the architecture and the hardware and software challenges of supporting streaming video on such a complex device. Issues presented include the development of the data-flow software architecture, and testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture is also presented.

  2. Monitoring Seabirds and Marine Mammals by Georeferenced Aerial Photography

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Weidauer, A.; Coppack, T.

    2016-06-01

    The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, the low flight altitudes necessary for the visual classification of species disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines), have become a mandatory requirement, technically solving the problem of distance-related observation bias. A purpose-assembled imagery system comprising medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at comparatively low cost. At a flight altitude of 425 m, with a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high-quality 16-bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye, guided by purpose-programmed software, to distinguish biological from non-biological signals. Each detected seabird or marine mammal signal is identified to species level or assigned to a species group and automatically saved into a geo-database for subsequent quality assurance, geo-statistical analyses and data export to third-party users. The relative size of a detected object can be measured accurately, which provides key information for species identification. During the development and testing of this system until 2015, more than 40 surveys produced around 500,000 digital aerial images, some of which were taken in specially protected areas (SPAs) of the Baltic Sea and thus include a wide range of relevant species. Here, we present the technical principles of this comparatively new survey approach and discuss the key methodological challenges related to optimizing survey design and workflow in view of the pending regulatory requirements for effective environmental impact assessments.
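    The reported figures are internally consistent with the standard frame-camera scale relationship GSD = H · p / f. A minimal sketch, assuming a sensor pixel pitch of roughly 5.2 µm and image dimensions of 7750 x 20500 pixels (both back-solved from the reported GSD and footprint, not stated in the abstract):

```python
def ground_sampling_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD in metres per pixel for a nadir-pointing frame camera."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def image_footprint(gsd_m, width_px, height_px):
    """Ground footprint (width, height) in metres of a single frame."""
    return gsd_m * width_px, gsd_m * height_px

# Flight parameters from the abstract; the 5.2 um pixel pitch is an assumption.
gsd = ground_sampling_distance(425, 110, 5.2)
print(round(gsd, 3))                      # ~0.02 m/pixel, matching the 2 cm GSD
print(image_footprint(gsd, 7750, 20500))  # roughly the 155 x 410 m footprint
```

    The same relationship explains why halving the flight altitude or doubling the focal length would halve the GSD.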

  3. Comparing automated classification and digitization approaches to detect change in eelgrass bed extent during restoration of a large river delta

    USGS Publications Warehouse

    Davenport, Anna Elizabeth; Davis, Jerry D.; Woo, Isa; Grossman, Eric; Barham, Jesse B.; Ellings, Christopher S.; Takekawa, John Y.

    2017-01-01

    Native eelgrass (Zostera marina) is an important contributor to ecosystem services: it supplies cover for juvenile fish, supports a variety of invertebrate prey resources for fish and waterbirds, provides substrate for herring roe consumed by numerous fish and birds, helps stabilize sediment, and sequesters organic carbon. Seagrasses are in decline globally, and monitoring changes in their growth and extent is increasingly valuable for determining the impacts of large-scale estuarine restoration and informing blue carbon mapping initiatives. We therefore examined the efficacy of two remote sensing mapping methods applied to high-resolution (0.5 m pixel size) color near-infrared imagery, with ground validation, to assess change following major tidal marsh restoration. Both automated classification of the false-color aerial imagery and digitized polygons documented a slight decline in eelgrass area directly after restoration, followed by an increase two years later. Classification of sparse and low- to medium-density eelgrass was confounded in areas with algal cover; however, large, dense patches of eelgrass were well delineated. Automated classification of the aerial imagery using unsupervised and supervised methods provided reasonable accuracy (73%), and hand-digitized polygons from the same imagery yielded similar results. Visual cues for hand digitizing from the high-resolution imagery provided as reliable a map of dense eelgrass extent as automated image classification. We found that automated classification had no advantage over manual digitization, particularly given the limitation of detecting eelgrass with only three bands of color near-infrared imagery.
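    The overall accuracy quoted above is conventionally computed as the trace of a confusion matrix (correctly classified validation samples) divided by the total. A minimal sketch with illustrative counts, not the study's validation data:

```python
import numpy as np

# Rows = reference (ground validation), columns = mapped class, in the
# order eelgrass / algae / unvegetated. All counts are hypothetical.
confusion = np.array([
    [56, 12,  7],
    [12, 40,  8],
    [ 9,  6, 50],
])

overall_accuracy = np.trace(confusion) / confusion.sum()
print(round(overall_accuracy, 2))  # 0.73, comparable to the study's figure
```

    Off-diagonal cells in the eelgrass/algae rows would correspond to the confusion between sparse eelgrass and algal cover described in the abstract.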

  4. Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.

    2016-12-01

    Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and they are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change-detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land-cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. From the new pixels obtained, we generate new features and apply a second level of clustering. We applied our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and can also differentiate object features on the surface.
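    The two-level scheme described above (super-pixels first, then clustering the super-pixels as new pixels) can be sketched without any imaging libraries. This toy version uses square 8x8 blocks as stand-in super-pixels and a naive centroid-linkage agglomerative merge on mean RGB+DEM features; the scene values are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 32x32 scene: left half "vegetation" (green, low elevation),
# right half "bare ground" (brown, higher elevation). Values are illustrative.
h = w = 32
rgb = np.zeros((h, w, 3))
rgb[:, :16] = [0.2, 0.6, 0.2]   # vegetation-like colour
rgb[:, 16:] = [0.5, 0.4, 0.3]   # soil-like colour
rgb += rng.normal(0, 0.02, rgb.shape)
dem = np.where(np.arange(w) < 16, 1.0, 3.0) + rng.normal(0, 0.05, (h, w))

# Step 1: "super-pixels" as 8x8 blocks; each block becomes one new pixel
# described by its mean R, G, B and mean elevation (4 features).
feats = []
for i in range(0, h, 8):
    for j in range(0, w, 8):
        block_rgb = rgb[i:i+8, j:j+8].reshape(-1, 3).mean(axis=0)
        block_dem = dem[i:i+8, j:j+8].mean()
        feats.append([*block_rgb, block_dem])
feats = np.array(feats)

# Step 2: naive centroid-linkage agglomerative clustering down to 2 clusters.
clusters = [[k] for k in range(len(feats))]
while len(clusters) > 2:
    best = None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            d = np.linalg.norm(feats[clusters[a]].mean(0) - feats[clusters[b]].mean(0))
            if best is None or d < best[0]:
                best = (d, a, b)
    _, a, b = best
    clusters[a] += clusters.pop(b)

sizes = sorted(len(c) for c in clusters)
print(sizes)  # two balanced clusters: [8, 8]
```

    Because the DEM difference dominates the feature distance here, the binary split recovers the vegetation/non-vegetation partition exactly; a real pipeline would use an image-derived over-segmentation (e.g. SLIC) rather than fixed blocks.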

  5. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D) and autonomous navigation during loss of communications. To do this, a camera is combined with cutting-edge algorithms (collectively called visual odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to compute a relative pose reliably, accurately, and quickly. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. The technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and loss-of-comm contingencies.

  6. Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.

    ERIC Educational Resources Information Center

    Mills, David A.; Kelley, Kevin; Jones, Michael

    2001-01-01

    Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)

  7. Digital Cameras for Student Use.

    ERIC Educational Resources Information Center

    Simpson, Carol

    1997-01-01

    Describes the features, equipment and operations of digital cameras and compares three different digital cameras for use in education. Price, technology requirements, features, transfer software, and accessories for the Kodak DC25, Olympus D-200L and Casio QV-100 are presented in a comparison table. (AEF)

  8. Vegetation Removal from Uav Derived Dsms, Using Combination of RGB and NIR Imagery

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Vlachos, M.

    2018-05-01

    Current advancements in photogrammetric software, along with the affordability and wide availability of Unmanned Aerial Vehicles (UAVs), allow rapid, timely and accurate 3D modelling and mapping of small to medium-sized areas. Although the importance and applications of large-format aerial cameras and overlapping photographs in Digital Surface Model (DSM) production, alongside LiDAR data, are well documented in the literature, this is not the case for UAV photography. In addition, the main disadvantage of photogrammetry is its inability to map the dead ground (terrain) in areas that include vegetation. This paper assesses the use of near-infrared imagery captured by small UAV platforms to automatically remove vegetation from Digital Surface Models (DSMs) and obtain a Digital Terrain Model (DTM). Two areas were tested, based on the availability of ground reference points both under trees and among vegetation, as well as on open terrain. RGB and near-infrared UAV photography was captured and processed using Structure from Motion (SfM) and Multi-View Stereo (MVS) algorithms to generate DSMs and the corresponding colour and NIR orthoimages, with pixel sizes of 0.20 m and 0.25 m for the two test sites, respectively. The orthophotos were then used to eliminate vegetation from the DSMs using the NDVI, thresholding, and masking. Following that, different interpolation algorithms, chosen according to the test sites, were applied to fill in the gaps and create DTMs. Finally, a statistical analysis was performed using reference terrain points captured in the field, both on dead ground and under vegetation, to evaluate the accuracy of the whole process and assess the overall accuracy of the derived DTMs in contrast with the DSMs.
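    The NDVI-threshold-mask chain described above can be sketched in a few lines. A toy single-patch example, assuming a fixed NDVI threshold of 0.3 and a trivial mean-fill in place of the paper's site-specific interpolation algorithms:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index, per pixel."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.3):
    """True where NDVI suggests vegetation; the threshold is scene-dependent."""
    return ndvi(nir, red) > threshold

def dsm_to_dtm(dsm, mask):
    """Blank vegetated cells, then fill each gap with the mean of the
    remaining terrain cells -- a stand-in for the interpolation step."""
    dtm = dsm.astype(float).copy()
    dtm[mask] = np.nan
    dtm[np.isnan(dtm)] = np.nanmean(dtm)
    return dtm

# Toy 4x4 example: a 5 m canopy patch sitting on 100 m terrain.
dsm = np.full((4, 4), 100.0)
dsm[1:3, 1:3] += 5.0                        # tree crowns
red = np.full((4, 4), 0.4); red[1:3, 1:3] = 0.1
nir = np.full((4, 4), 0.45); nir[1:3, 1:3] = 0.6

dtm = dsm_to_dtm(dsm, vegetation_mask(nir, red))
print(dtm.max())  # 100.0 -- the canopy heights have been removed
```

    Real scenes would need a per-site threshold (or Otsu-style selection) and a spatial interpolator (IDW, kriging, TIN) rather than a global mean.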

  9. Using High Spatial Resolution Digital Imagery

    DTIC Science & Technology

    2005-02-01

    digital base maps were high resolution U.S. Geological Survey (USGS) Digital Orthophoto Quarter Quadrangles (DOQQ). The Root Mean Square Errors (RMSE...next step was to assign real world coordinates to the linear im- age. The mosaics were geometrically registered to the panchromatic orthophotos ...useable thematic map from high-resolution imagery. A more practical approach may be to divide the Refuge into a set of smaller areas, or tiles

  10. Camera Ready: Capturing a Digital History of Chester

    ERIC Educational Resources Information Center

    Lehman, Kathy

    2008-01-01

    Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD.…

  11. Photogrammetry on glaciers: Old and new knowledge

    NASA Astrophysics Data System (ADS)

    Pfeffer, W. T.; Welty, E.; O'Neel, S.

    2014-12-01

    In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery but have no efficient means to extract the data they desire from them. In many cases these are single-image time series, where the processing limitation lies in the paucity of methods for obtaining 3-dimensional object-space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated, but no automated means is at hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object-space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results and sometimes buried in the decades-old, pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, further effort to share knowledge and methods would greatly benefit the community. We consider some of the main problems to be solved and aspects of how optimal knowledge sharing might be accomplished.

  12. Potential and Limitations of Low-Cost Unmanned Aerial Systems for Monitoring Altitudinal Vegetation Phenology in the Tropics

    NASA Astrophysics Data System (ADS)

    Silva, T. S. F.; Torres, R. S.; Morellato, P.

    2017-12-01

    Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and it is highly susceptible to climatic change. Phenological knowledge in the tropics is limited by a lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations but offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capacity for spatial coverage, but challenges remain in producing color-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging for monitoring phenology in tropical altitudinal grasslands and forests, asking: 1) Can very-high-resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by the physical limitations of the sensor? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10am and 4pm at 120-150 m a.g.l., yielding 5-10 cm spatial resolution. To compensate for illumination changes caused by time of day, season and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal. These variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric differences from vegetative structures allowed detection of reproductive timing in the HSV color space, despite illumination effects. We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.
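    One reason the reproductive signal survives illumination changes in HSV space is that hue is largely invariant to uniform brightness scaling, while raw RGB values are not. A small sketch with the standard library's colorsys; all pixel values are illustrative, not from the study:

```python
import colorsys

# The same hypothetical yellow flower under full sun and under ~half the
# illumination, plus a green vegetative pixel for comparison.
flower_bright = (0.9, 0.8, 0.2)
flower_shaded = (0.45, 0.4, 0.1)
leaf = (0.2, 0.5, 0.2)

def hue(rgb):
    """Hue channel of the HSV transform, in [0, 1)."""
    return colorsys.rgb_to_hsv(*rgb)[0]

# Same hue for both flower samples, distinct from the leaf.
print(round(hue(flower_bright), 3), round(hue(flower_shaded), 3), round(hue(leaf), 3))
```

    In practice shadows also shift colour temperature, so hue is only approximately illumination-invariant; the abstract's point is that conspicuous flower colours remain separable even so.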

  13. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion in individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency by frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by shifting the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras should lead not to more alarms, more monitors, more operators, and increased response latency, but to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  15. Extracting Plant Phenology Metrics in a Great Basin Watershed: Methods and Considerations for Quantifying Phenophases in a Cold Desert.

    PubMed

    Snyder, Keirith A; Wehan, Bryce L; Filippa, Gianluca; Huntington, Justin L; Stringham, Tamzen K; Snyder, Devon K

    2016-11-18

    Plant phenology is recognized as important for ecological dynamics. Phenology and camera networks have recently proliferated worldwide. The established PhenoCam Network has sites across the United States, including the western states; however, there is a paucity of published research from semi-arid regions. In this study, we demonstrate the utility of camera-based repeat digital imagery and the R phenopix package for quantifying plant phenology and phenophases in four plant communities in the semi-arid cold desert region of the Great Basin. We developed an automated variable snow/night filter for removing ephemeral snow events, which allowed phenophases to be fitted with a double-logistic algorithm. We were able to detect low-amplitude seasonal variation in pinyon and juniper canopies and sagebrush steppe, and to characterize wet and mesic meadows in area-averaged analyses. We used individual pixel-based spatial analyses to separate sagebrush shrub-canopy pixels from interspace by determining differences in their phenophases. The ability to monitor plant phenology with camera-based images fills spatial and temporal gaps in remotely sensed data and field-based surveys, allowing species-level relationships between environmental variables and phenology to be developed on a fine time scale, thus providing powerful new tools for land management.
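    PhenoCam-style analyses such as phenopix typically track greenness as the green chromatic coordinate, GCC = G / (R + G + B), averaged over a region of interest. A minimal sketch with illustrative (not study) ROI means across a green-up sequence:

```python
def gcc(r, g, b):
    """Green chromatic coordinate, the greenness index used in
    PhenoCam-style repeat-photography analyses."""
    return g / (r + g + b)

# Hypothetical seasonal ROI mean digital numbers for a canopy, early to late
# green-up; values are illustrative only.
season = [(0.30, 0.32, 0.25), (0.28, 0.38, 0.22), (0.27, 0.45, 0.20)]
series = [round(gcc(*p), 3) for p in season]
print(series)  # greenness rises through green-up
```

    Because GCC normalises by total brightness, it suppresses much of the day-to-day illumination variation; the double-logistic fit mentioned in the abstract would then be applied to such a series after filtering snow and night images.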

  16. Photogrammetric Measurements in Fixed Wing Uav Imagery

    NASA Astrophysics Data System (ADS)

    Gülch, E.

    2012-07-01

    Several flights have been undertaken with PAMS (Photogrammetric Aerial Mapping System) by Germap, Germany, which is briefly introduced. The system is based on the SmartPlane fixed-wing UAV and a Canon IXUS camera system. The plane is equipped with GPS and has an infrared sensor system to estimate attitude values. Software has been developed to link the PAMS output to a standard photogrammetric processing chain built on Trimble INPHO. The linking of image files and image IDs, and the handling of cases with partly corrupted output, had to be solved to generate an INPHO project file. Based on this project file, the software packages MATCH-AT, MATCH-T DSM, OrthoMaster and OrthoVista are applied for digital aerial triangulation, DTM/DSM generation and, finally, digital orthomosaic generation. The focus has been on investigating how to adapt the "usual" parameters of the digital aerial triangulation and other software to UAV flight conditions, which feature high overlap, large kappa angles and a certain amount of image blur in case of turbulence. It was found that the selected parameter setup behaves quite stably and can be applied to other flights. A comparison is made with results from other open-source multi-ray matching software to handle the described flight conditions. Flights over the same area at different times were compared to each other. The major objective here was to see to what extent differences occur relative to each other, without access to ground control data, which would have potential for applications with low requirements on absolute accuracy. The results show visible influences of weather and illumination. The "unusual" flight pattern, with large time differences between neighbouring strips, influences the AT and DTM/DSM generation. The results obtained so far indicate problems in the stability of the camera calibration. 
    This clearly calls for the use of GCPs in all projects, independent of the application. The effort is estimated to be even higher than expected, as self-calibration will also be needed to handle a possibly unstable camera calibration. To overcome some of the problems encountered with the very specific features of UAV flights, the software UAVision was developed, based on open-source libraries, to produce input data for bundle adjustment of the UAV images from PAMS. Empirical test results show a considerable improvement in the matching of tie points. The results do, however, show that the open-source bundle adjustment was not applicable to this type of imagery. This still leaves the possibility of using the improved tie-point correspondences in the commercial AT package.

  17. Application of high resolution images from unmanned aerial vehicles for hydrology and rangeland science

    NASA Astrophysics Data System (ADS)

    Rango, A.; Vivoni, E. R.; Anderson, C. A.; Perini, N. A.; Saripalli, S.; Laliberte, A.

    2012-12-01

    A common problem in many natural resource disciplines is the lack of images of sufficiently high spatial resolution for monitoring and modeling purposes. Advances have been made in the utilization of Unmanned Aerial Vehicles (UAVs) in hydrology and rangeland science. By flying at low altitudes and velocities, UAVs are able to produce high-resolution (5 cm) images as well as stereo coverage (with 75% forward overlap and 40% sidelap) from which digital elevation models (DEMs) can be extracted. Another advantage of flying at low altitude is that the potential problem of atmospheric haze obscuration is eliminated. Both small fixed-wing and rotary-wing aircraft have been used in our experiments over two rangeland areas, the Jornada Experimental Range in southern New Mexico and the Santa Rita Experimental Range in southern Arizona. The fixed-wing UAV has a digital camera in the wing and a six-band multispectral camera in the nose, while the rotary-wing UAV carries a digital camera as payload. Because we have been acquiring imagery for several years, there are now more than 31,000 photos at one of the study sites, and 177 mosaics over rangeland areas have been constructed. Using the DEMs obtained from the imagery we have determined the actual catchment areas of three watersheds and compared these to previous estimates. At one site, the UAV-derived watershed area is 4.67 ha, which is 22% smaller than a manual survey made with a GPS unit several years ago. This difference can be significant when constructing a watershed model of the site. From a vegetation species classification, we also determined that two of the shrub types in this small watershed (mesquite and creosote, with 6.47% and 5.82% cover, respectively) grow in similar locations (flat upland areas with deep soils), whereas the most predominant shrub (mariola, with 11.9% cover) inhabits hillslopes near stream channels (with steep, shallow soils). The positioning of these individual shrubs throughout the catchment using UAV image classifications is required as input to detailed watershed modeling. There are multiple advantages to UAVs in hydrology and rangeland science, including that coverage is less expensive while just as accurate as conventional ground measurements. UAV guidance systems can also guarantee returning to the same location for change-detection analysis. UAVs also have advantages over manned aircraft because they are safer, less expensive, and can respond in a timelier manner to new flight requests. As a result, the use of UAVs for watershed and rangeland monitoring and modeling is a rapidly expanding civil application in natural resources.

  18. Geomorphological mapping with a small unmanned aircraft system (sUAS): Feature detection and accuracy assessment of a photogrammetrically-derived digital terrain model

    NASA Astrophysics Data System (ADS)

    Hugenholtz, Chris H.; Whitehead, Ken; Brown, Owen W.; Barchyn, Thomas E.; Moorman, Brian J.; LeClair, Adam; Riddell, Kevin; Hamilton, Tayler

    2013-07-01

    Small unmanned aircraft systems (sUAS) are a relatively new type of aerial platform for acquiring high-resolution remote sensing measurements of Earth surface processes and landforms. However, despite growing application, there has been little quantitative assessment of sUAS performance. Here we present results from a field experiment designed to evaluate the accuracy of a photogrammetrically-derived digital terrain model (DTM) developed from imagery acquired with a low-cost digital camera onboard an sUAS. We also show the utility of the high-resolution (0.1 m) sUAS imagery for resolving small-scale biogeomorphic features. The experiment was conducted in an area with active and stabilized aeolian landforms in the southern Canadian Prairies. Images were acquired with a Hawkeye RQ-84Z Areohawk fixed-wing sUAS. A total of 280 images were acquired along 14 flight lines, covering an area of 1.95 km². The survey was completed in 4.5 h, including GPS surveying, sUAS setup and flight time. Standard image processing and photogrammetric techniques were used to produce a 1 m resolution DTM and a 0.1 m resolution orthorectified image mosaic. The latter revealed previously unmapped bioturbation features. The vertical accuracy of the DTM was evaluated with 99 Real-Time Kinematic GPS points, while 20 of these points were used to quantify horizontal accuracy. The horizontal root mean squared error (RMSE) of the orthoimage was 0.18 m, while the vertical RMSE of the DTM was 0.29 m, which is equivalent to the RMSE of a bare-earth LiDAR DTM for the same site. The combined error from both datasets was used to define a threshold for the minimum elevation difference that could be reliably attributed to erosion or deposition in the seven years separating the sUAS and LiDAR datasets. Overall, our results suggest that sUAS-acquired imagery may provide a low-cost, rapid, and flexible alternative to airborne LiDAR for geomorphological mapping.
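    The "combined error" threshold mentioned above is commonly formed by propagating the two DTMs' vertical RMSEs in quadrature, optionally scaled by a confidence multiplier. A sketch of that common formulation (the paper's exact multiplier is not stated here; check-point residuals are hypothetical):

```python
import math

def rmse(errors):
    """Root mean squared error of a list of check-point residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def min_detectable_change(rmse_a, rmse_b, t=1.96):
    """Threshold below which a DTM-of-difference value cannot be reliably
    attributed to erosion/deposition (t = 1.96 -> ~95% confidence)."""
    return t * math.sqrt(rmse_a**2 + rmse_b**2)

# Hypothetical vertical residuals at five GPS check points (m).
check_residuals = [0.31, -0.22, 0.35, -0.27, 0.29]
print(round(rmse(check_residuals), 2))   # 0.29, matching the reported vertical RMSE

# Both the sUAS DTM and the LiDAR DTM report ~0.29 m vertical RMSE.
print(round(min_detectable_change(0.29, 0.29), 2))
```

    With both surfaces at 0.29 m RMSE, only elevation differences of roughly 0.8 m or more could be confidently read as geomorphic change at the 95% level under this formulation.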

  19. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP), though a small, lightweight, digital camera system can be extremely useful in other applications as well. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems, and the electronics are truly portable, fitting in a standard briefcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.

  20. Photogrammetry and Remote Sensing: New German Standards (din) Setting Quality Requirements of Products Generated by Digital Cameras, Pan-Sharpening and Classification

    NASA Astrophysics Data System (ADS)

    Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.

    2012-08-01

    Ten years after the first digital airborne mapping camera was introduced at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development: the signal-to-noise ratio and the dynamic range are significantly better than with analogue cameras, and digital cameras can be spectrally and radiometrically calibrated. The use of these cameras required rethinking in many areas, though, and new data products were introduced. In recent years, several activities took place that should lead to a better understanding of the cameras and the data they produce. Several projects, such as those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) and EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards are presented. These include the standard for digital cameras, the standard for orthorectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar/SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments and to jointly develop a suite of implementation specifications for photogrammetry and remote sensing.

  1. Mapping the Riverscape of the Middle Fork John Day River with Structure-from-Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J. T.

    2014-12-01

    Aerial photography has proven an efficient method to collect a wide range of continuous variables for large sections of rivers. These data include variables such as planimetric shape, low-flow and bank-full widths, bathymetry, and sediment sizes. Mapping these variables in a continuous manner allows us to explore the heterogeneity of the river and build a more complete picture of the holistic riverscape. To explore a low-cost option for aerial photography and riverscape mapping, I used the combination of a piloted helicopter and an off-the-shelf digital SLR camera to collect aerial imagery for a 32 km segment of the Middle Fork John Day River in eastern Oregon. This imagery was processed with Structure-from-Motion (SfM) photogrammetry to produce high-resolution (10 cm) orthophotos and digital surface models that were used to extract riverscape variables. The Middle Fork John Day River is an important spawning river for anadromous Chinook and steelhead and has been the focus of widespread restoration and conservation activities in response to the legacies of extensive grazing and mining activity. By mapping the riverscape of the Middle Fork John Day, I explored downstream relationships between several geomorphic variables with hyperscale analysis. These riverscape data also provided an opportunity to make a continuous map of habitat suitability for migrating adult Chinook. Both the geomorphic and habitat suitability analyses provide an important assessment of the natural variation in the river and the impact of human modification, both positive and negative.

  2. Rapid dispersal of saltcedar (Tamarix spp.) biocontrol beetles (Diorhabda carinulata) on a desert river detected by phenocams, MODIS imagery and ground observations

    USGS Publications Warehouse

    Nagler, Pamela L.; Pearlstein, Susanna; Glenn, Edward P.; Brown, Tim B.; Bateman, Heather L.; Bean, Dan W.; Hultine, Kevin R.

    2013-01-01

    We measured the rate of dispersal of saltcedar leaf beetles (Diorhabda carinulata), a defoliating insect released on western rivers to control saltcedar shrubs (Tamarix spp.), on a 63 km reach of the Virgin River, U.S. Dispersal was measured by satellite imagery, ground surveys and phenocams. Pixels from the Moderate Resolution Imaging Spectrometer (MODIS) sensors on the Terra satellite showed a sharp drop in NDVI in midsummer followed by recovery, correlated with defoliation events as revealed in networked digital camera images and ground surveys. Ground surveys and MODIS imagery showed that beetle damage progressed downstream at a rate of about 25 km yr−1 in 2010 and 2011, producing a 50% reduction in saltcedar leaf area index and evapotranspiration by 2012, as estimated by algorithms based on MODIS Enhanced Vegetation Index values and local meteorological data for Mesquite, Nevada. This reduction is the equivalent of 10.4% of mean annual river flows on this river reach. Our results confirm other observations that saltcedar beetles are dispersing much faster than originally predicted in pre-release biological assessments, presenting new challenges and opportunities for land, water and wildlife managers on western rivers. Despite relatively coarse resolution (250 m) and gridding artifacts, single MODIS pixels can be useful in tracking the effects of defoliating insects in riparian corridors.

  3. Aspects of Voyager photogrammetry

    NASA Technical Reports Server (NTRS)

    Wu, Sherman S. C.; Schafer, Francis J.; Jordan, Raymond; Howington, Annie-Elpis

    1987-01-01

    In January 1986, Voyager 2 took a series of pictures of Uranus and its satellites with the Imaging Science System (ISS) on board the spacecraft. Based on six stereo images from the ISS narrow-angle camera, a topographic map was compiled of the Southern Hemisphere of Miranda, one of Uranus' moons. Assuming a spherical figure, a 20-km surface relief is shown on the map. With three additional images from the ISS wide-angle camera, a control network of Miranda's Southern Hemisphere was established by analytical photogrammetry, producing 88 ground points for the control of multiple-model compilation on the AS-11AM analytical stereoplotter. Digital terrain data from the topographic map of Miranda have also been produced. By combining these data and the image data from the Voyager 2 mission, perspective views or even a movie of the mapped area can be made. The application of these newly developed techniques to Voyager 1 imagery, which includes a few overlapping pictures of Io and Ganymede, permits the compilation of contour maps or topographic profiles of these bodies on the analytical stereoplotters.

  4. Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.

    PubMed

    Everitt, J H; Yang, C

    2007-11-01

    A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
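
    The producer's and user's accuracies quoted above come from a standard confusion-matrix calculation. A minimal sketch, assuming rows are reference classes and columns are classified classes; the matrix values here are made up for illustration, not the study's data:

```python
import numpy as np

def class_accuracies(confusion, cls):
    """confusion[i, j] = samples of reference class i assigned to class j."""
    correct = confusion[cls, cls]
    producers = correct / confusion[cls, :].sum()  # complement of omission error
    users = correct / confusion[:, cls].sum()      # complement of commission error
    return producers, users

# Hypothetical 2-class matrix: broom snakeweed vs. other rangeland cover
conf = np.array([[90, 10],
                 [5, 95]])
p, u = class_accuracies(conf, 0)
print(p, round(u, 3))
```

    Producer's accuracy answers "how much of the real snakeweed was mapped?"; user's accuracy answers "how much of the mapped snakeweed is real?".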

  5. Single chip camera active pixel sensor

    NASA Technical Reports Server (NTRS)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single-chip camera includes communications circuitry that operates most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the circuitry necessary for operating the chip using a single pin.

  6. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    Celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. Celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through video frame differencing and directional low-pass image filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the celerity based on the nonlinear shallow water wave equations (NSWE), computed using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone, except in the regions around the incipient wave breaking locations. In those regions, the observed wave celerity exceeds even the NSWE-based celerity due to the transition of wave crest shapes. The celerity observed from video imagery can be used to monitor nearshore geometry through depth inversion based on nonlinear wave celerity theories; for this purpose, the excess celerity around the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
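
    For context, the two celerity estimates being compared can be sketched with the usual shallow-water formulas. The amplitude-corrected form below, c = sqrt(g(h + H)), is one common nonlinear approximation and is an illustrative assumption, not necessarily the exact NSWE expression the authors used:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def linear_celerity(h):
    # Linear shallow-water theory: c = sqrt(g * h)
    return math.sqrt(G * h)

def amplitude_corrected_celerity(h, H):
    # Nonlinear (amplitude-corrected) estimate: c = sqrt(g * (h + H)),
    # where H is the local wave height
    return math.sqrt(G * (h + H))

# Breaking waves travel faster than linear theory predicts at the same depth
h, H = 2.0, 0.6  # illustrative depth and wave height, metres
print(round(linear_celerity(h), 2), round(amplitude_corrected_celerity(h, H), 2))
```

    Depth inversion runs this logic in reverse: given an observed celerity and an assumed celerity theory, solve for the depth h.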

  7. Selecting the right digital camera for telemedicine-choice for 2009.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret

    2010-03-01

    Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.

  8. Using Digital Imaging in Classroom and Outdoor Activities.

    ERIC Educational Resources Information Center

    Thomasson, Joseph R.

    2002-01-01

    Explains how to use digital cameras and related basic equipment during indoor and outdoor activities. Uses digital imaging in general botany class to identify unknown fungus samples. Explains how to select a digital camera and other necessary equipment. (YDS)

  9. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-appliance-based communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services, as well as the need to download software upgrades to the camera, is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper evaluates the system implications of deploying recurring-revenue services and enterprise connectivity for a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, and security. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to enterprise systems.

  10. Voss with video camera in Service Module

    NASA Image and Video Library

    2001-04-08

    ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.

  11. Spectral colors capture and reproduction based on digital camera

    NASA Astrophysics Data System (ADS)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space. This study also provides a basis for further studies on spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally spaced locations. Two wavelength values were obtained for each location, one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. Using the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices such as displays and transmission.
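
    Camera characterization by polynomial fitting, as used above, amounts to a least-squares fit from expanded RGB terms to XYZ. This sketch uses a hypothetical second-order term set and synthetic data, not the paper's actual term set or measured samples:

```python
import numpy as np

def expand(rgb):
    # Second-order polynomial expansion of camera RGB (assumed term set)
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, g * b, r * b, r * r, g * g, b * b])

def fit_characterization(rgb, xyz):
    # Least-squares matrix mapping expanded RGB to CIEXYZ
    M, *_ = np.linalg.lstsq(expand(rgb), xyz, rcond=None)
    return M

# Synthetic training data standing in for measured spectrum patches
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, (50, 3))
xyz = rgb @ rng.uniform(0, 1, (3, 3))  # pretend-linear camera for the demo
M = fit_characterization(rgb, xyz)
print(np.allclose(expand(rgb) @ M, xyz, atol=1e-8))
```

    In practice the XYZ targets would come from spectrophotometer measurements of the same patches, and the fit quality would be reported as a color difference (e.g. the 3.76 average above).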

  12. Precision measurements from very-large scale aerial digital imagery.

    PubMed

    Booth, D Terrance; Cox, Samuel E; Berryman, Robert D

    2006-01-01

    Resource managers need length and width measurements of a variety of items, including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches, and other things important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very large scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel by using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface that accounts for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured from a high-frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss applications of these tools, including error estimates. We found measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with an error of less than 10%. We recommend these software packages as a means for expanding the utility of aerial image data.
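
    The object-to-camera distance correction mentioned above follows from the pinhole model: ground coverage per pixel scales linearly with altitude AGL. A minimal sketch; the function name and example values are illustrative, not taken from the ImageMeasurement software:

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    # Pinhole model: GSD = altitude * pixel_pitch / focal_length (metres/pixel)
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# e.g. a 100 m AGL pass with a 50 mm lens and 5 micron pixels
gsd = ground_sample_distance(100.0, 50.0, 5.0)
print(gsd * 1000)  # millimetres per pixel
```

    This is why logging precise altitude at each camera trigger (the LaserLOG/Merge step) matters: a few metres of altitude drift changes the per-pixel scale of every measurement made in the image.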

  13. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) Unmanned Aerial Vehicles (UAVs) in consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured with consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image-quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs and error estimates, allow reproducibility of the results, and make these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
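
    One capture-settings trade-off of the kind discussed above, exposure time versus forward-motion blur, can be sketched as a simple bound: keep the ground distance travelled during the exposure under a tolerated fraction of a pixel. The function name and numbers are illustrative assumptions, not the authors' worked examples:

```python
def max_shutter_time(ground_speed_ms, gsd_m, blur_tolerance_px=1.0):
    # Longest exposure for which forward motion smears the image by less
    # than blur_tolerance_px pixels on the ground
    return blur_tolerance_px * gsd_m / ground_speed_ms

# e.g. a 20 m/s fixed-wing platform at 5 cm GSD, allowing 1 pixel of blur
print(max_shutter_time(20.0, 0.05))  # seconds
```

    Bounds like this tie the platform (speed), the flight plan (GSD via altitude) and the camera settings (shutter time, hence ISO and aperture) together, which is why they need to be planned jointly rather than set independently.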

  14. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, a pressing task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process inside the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical-subsystem artifacts and sensor noise.

  15. Synthesis of optical polarization signatures of military aircraft

    NASA Astrophysics Data System (ADS)

    Egan, Walter G.; Duggin, Michael J.

    2002-01-01

    Focal-plane wide-band IR imagery will be compared with wide-band visual focal-plane digital imagery of a camouflaged B-52 bomber. Extreme enhancement is possible using digital polarized imagery. The experimental observations will be compared with theoretical calculations and modeling results for both specular and shadowed areas to allow extrapolation to the synthesis of the optical polarization signatures of other aircraft. The relationship of both the specular and the shadowed areas to surface structure, orientation, specularity, roughness, shadowing and the complex index of refraction will be illustrated. The imagery was obtained in two plane-polarized directions. Many aircraft locations were measured, as well as the sky background.

  16. Monitoring Kilauea Volcano Using Non-Telemetered Time-Lapse Camera Systems

    NASA Astrophysics Data System (ADS)

    Orr, T. R.; Hoblitt, R. P.

    2006-12-01

    Systematic visual observations are an essential component of monitoring volcanic activity. At the Hawaiian Volcano Observatory, the development and deployment of a new generation of high-resolution, non-telemetered, time-lapse camera systems provides periodic visual observations in inaccessible and hazardous environments. The camera systems combine a hand-held digital camera, programmable shutter-release, and other off-the-shelf components in a package that is inexpensive, easy to deploy, and ideal for situations in which the probability of equipment loss due to volcanic activity or theft is substantial. The camera systems have proven invaluable in correlating eruptive activity with deformation and seismic data streams. For example, in late 2005 and much of 2006, Pu`u `O`o, the active vent on Kilauea Volcano's East Rift Zone, experienced 10--20-hour cycles of inflation and deflation that correlated with increases in seismic energy release. A time-lapse camera looking into a skylight above the main lava tube about 1 km south of the vent showed an increase in lava level---an indicator of increased lava flux---during periods of deflation, and a decrease in lava level during periods of inflation. A second time-lapse camera, with a broad view of the upper part of the active flow field, allowed us to correlate the same cyclic tilt and seismicity with lava breakouts from the tube. The breakouts were accompanied by rapid uplift and subsidence of shatter rings over the tube. The shatter rings---concentric rings of broken rock---rose and subsided by as much as 6 m in less than an hour during periods of varying flux. Time-lapse imagery also permits improved assessment of volcanic hazards, and is invaluable in illustrating the hazards to the public. In collaboration with Hawaii Volcanoes National Park, camera systems have been used to monitor the growth of lava deltas at the entry point of lava into the ocean to determine the potential for catastrophic collapse.

  17. Early forest fire detection using principal component analysis of infrared video

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Radjabi, Ryan; Jacobs, John T.

    2011-09-01

    A land-based early forest fire detection scheme that exploits the infrared (IR) temporal signature of a fire plume is described. Unlike common land-based and/or satellite-based techniques, which rely on measurement and discrimination of the fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement, and the false alarm rate is expected to be lower than that of existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each nonoverlapping sequence of video frames from the camera to produce a corresponding sequence of temporally uncorrelated principal component (PC) images. Since pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image that best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of tree branches moving in the wind.
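
    The PCT step can be sketched by treating each pixel's T-sample time series as a feature vector, diagonalizing the T x T temporal covariance, and projecting. This is a generic PCT sketch under that reading of the abstract, not the authors' implementation:

```python
import numpy as np

def principal_component_images(frames):
    """frames: (T, H, W) stack of IR frames -> (T, H, W) temporally
    decorrelated principal component images, highest variance first."""
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float).T  # (pixels, T) samples
    X = X - X.mean(axis=0)                     # center each temporal band
    cov = X.T @ X / (X.shape[0] - 1)           # (T, T) temporal covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]             # sort by descending variance
    pcs = X @ vecs[:, order]
    return pcs.T.reshape(T, H, W)

# Demo on random data: verify the output shape and temporal decorrelation
rng = np.random.default_rng(1)
frames = rng.normal(size=(6, 32, 32))
pc = principal_component_images(frames)
print(pc.shape)
```

    Pixels whose temperatures fluctuate together, as plume pixels do, project onto a common component, which is why the plume's footprint concentrates in the low-order PC images.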

  18. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    Describes a 3D imaging system that combines a pair of FAU's HDMAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection. Topics include polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors, the HDMAX camera pair and SRX-R105 projector configuration for 3D display, and the effect of camera rotation on the projected overlay image.

  19. Sensor fusion and augmented reality with the SAFIRE system

    NASA Astrophysics Data System (ADS)

    Saponaro, Philip; Treible, Wayne; Phelan, Brian; Sherbondy, Kelly; Kambhamettu, Chandra

    2018-04-01

    The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system can detect hidden targets using radar, a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An Augmented Reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, where the user can "fly" around the 3D model and switch between modalities.

  20. Study and validation of tools interoperability in JPSEC

    NASA Astrophysics Data System (ADS)

    Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.

    2005-08-01

    Digital imagery is important in many applications today, and the security of digital imagery is important today and is likely to gain in importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, and in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creators and consumers produced by different manufacturers. The JPSEC standard, similar to the popular JPEG and MPEG family of standards, specifies only the bitstream syntax and the receiver's processing, and not how the bitstream is created or the details of how it is consumed. This paper examines the interoperability for the JPSEC standard, and presents an example JPSEC consumption process which can provide insights in the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is on-going.

  1. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO standards for resolution, well established for consumer digital still cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  2. Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.

    PubMed

    Porch, Timothy G; Erpelding, John E

    2006-04-30

    A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.

  3. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  4. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has relied on a remote-control link between the UAV and a ground control system over a radio frequency (RF) modem operating at about 430 MHz. This RF-modem approach, however, is limited for long-distance communication. We therefore developed a UAV communication module that uses a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, and used it to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device carried by the drone over the area to be imaged and software for operating and managing the smart camera. It comprises automatic shooting driven by the smart camera's sensors and a shooting catalogue that manages the captured images and their metadata. The UAV imagery was processed into orthoimagery with Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  5. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When printing images from a camera, the user needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors; the result was visually more pleasing than images captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
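The curve-based correction described above can be sketched as a per-channel lookup-table operation; the control points below are hypothetical illustrations, not the curves from the paper:

```python
import numpy as np

def apply_transfer_curves(img, control_points):
    """Apply smoothed per-channel transfer curves to an 8-bit RGB image.

    control_points maps a channel index (0=R, 1=G, 2=B) to (input, output)
    pairs; values between points are interpolated into a 256-entry lookup
    table, similar in spirit to a Photoshop curves adjustment.
    """
    out = np.empty_like(img)
    x = np.arange(256)
    for c, pts in control_points.items():
        xp, fp = zip(*pts)
        lut = np.interp(x, xp, fp).astype(np.uint8)  # smoothed monotone curve
        out[..., c] = lut[img[..., c]]
    return out

# Hypothetical curve that lifts shadows in all three channels.
curves = {c: [(0, 0), (64, 80), (255, 255)] for c in range(3)}
img = np.full((2, 2, 3), 64, dtype=np.uint8)
print(apply_transfer_curves(img, curves)[0, 0])
```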

  6. Training site statistics from Landsat and Seasat satellite imagery registered to a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1981-01-01

    Landsat and Seasat satellite imagery and training site boundary coordinates were registered to a common Universal Transverse Mercator map base in the Newport Beach area of Orange County, California. The purpose was to establish a spatially-registered, multi-sensor data base which would test the use of Seasat synthetic aperture radar imagery to improve spectral separability of channels used for land use classification of an urban area. Digital image processing techniques originally developed for the digital mosaics of the California Desert and the State of Arizona were adapted to spatially register multispectral and radar data. Techniques included control point selection from imagery and USGS topographic quadrangle maps, control point cataloguing with the Image Based Information System, and spatial and spectral rectifications of the imagery. The radar imagery was pre-processed to reduce its tendency toward uniform data distributions, so that training site statistics for selected Landsat and pre-processed Seasat imagery indicated good spectral separation between channels.

  7. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
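The pipeline the authors describe (distortion parameters as features, a support vector machine as classifier) can be sketched as follows; the coefficient values, the two-camera setup, and the linear kernel are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical feature vectors: estimated radial distortion coefficients
# (k1, k2) per image, e.g. from the model r_u = r_d + k1*r_d**3 + k2*r_d**5.
rng = np.random.default_rng(0)
cam_a = rng.normal([0.12, -0.03], 0.005, size=(20, 2))  # images from camera A
cam_b = rng.normal([0.20, -0.01], 0.005, size=(20, 2))  # images from camera B
X = np.vstack([cam_a, cam_b])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X, y)  # train the classifier on known cameras
print(clf.predict([[0.12, -0.03], [0.20, -0.01]]))  # identify unseen images
```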

  8. An Undergraduate Endeavor: Assembling a Live Planetarium Show About Mars

    NASA Astrophysics Data System (ADS)

    McGraw, Allison M.

    2016-10-01

    Viewing the mysterious red planet Mars goes back thousands of years to naked-eye observation, but in more recent years the growth of telescopes, satellites, and lander missions has unveiled unrivaled detail of the Martian surface that tells a story worth listening to. This planetarium show will go through the observations, starting with the ancients and moving to current understandings of the Martian surface, atmosphere, and inner workings through past and current Mars missions. Visual animations of its planetary motions, high-resolution images from the HiRISE (High Resolution Imaging Science Experiment) and CTX (Context Camera) instruments aboard the MRO (Mars Reconnaissance Orbiter), and other datasets will be used to display the terrain detail and imagery of the planet Mars with a digital projection system. Local planetary scientists and Mars specialists from the Lunar and Planetary Lab at the University of Arizona (Tucson, AZ) will be interviewed and featured in the show to highlight current technology and understandings of the red planet. This is an undergraduate project that is looking for collaborations and insight in order to gain structure in script writing that will teach about this planetary body to all ages in the format of a live planetarium show.

  9. Improved quantification of mountain snowpack properties using observations from Unmanned Air Vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Shea, J. M.; Harder, P.; Pomeroy, J. W.; Kraaijenbrink, P. D. A.

    2017-12-01

    Mountain snowpacks represent a critical seasonal reservoir of water for downstream needs, and snowmelt is a significant component of mountain hydrological budgets. Ground-based point measurements are unable to describe the full spatial variability of snow accumulation and melt rates, and repeat Unmanned Air Vehicle (UAV) surveys provide an unparalleled opportunity to measure snow accumulation, redistribution and melt in alpine environments. This study presents results from a UAV-based observation campaign conducted at the Fortress Mountain Snow Laboratory in the Canadian Rockies in 2017. Seven survey flights were conducted between April (maximum snow accumulation) and mid-July (bare ground) to collect imagery with both an RGB camera and thermal infrared imager with the sensefly eBee RTK platform. UAV imagery are processed with structure from motion techniques, and orthoimages, digital elevation models, and surface temperature maps are validated against concurrent ground observations of snow depth, snow water equivalent, and snow surface temperature. We examine the seasonal evolution of snow depth and snow surface temperature, and explore the spatial covariances of these variables with respect to topographic factors and snow ablation rates. Our results have direct implications for scaling snow ablation calculations and model resolution and discretization.
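A minimal sketch of the core measurement idea (not the authors' processing chain): snow depth from repeat UAV surveys is the per-cell difference between a snow-on surface model and the snow-free surface model; all values below are invented, including the assumed snow density.

```python
import numpy as np

dsm_snow = np.array([[1502.3, 1501.8], [1500.9, 1501.2]])  # snow-on DSM, m
dsm_bare = np.array([[1501.1, 1500.7], [1500.2, 1500.9]])  # snow-free DSM, m

depth = np.clip(dsm_snow - dsm_bare, 0, None)  # per-cell snow depth, m
swe = depth * 0.30 * 1000   # snow water equivalent in mm at assumed 300 kg/m^3
print(depth)
```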

  10. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland

    NASA Astrophysics Data System (ADS)

    Lu, Bing; He, Yuhong

    2017-06-01

    Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. Maps of species distribution are subsequently used for a spatio-temporal change analysis. 
    Results indicate that UAV-acquired imagery is an unparalleled data source for studying fine-scale grassland species composition, owing to its high spatial resolution. The overall accuracy is around 85% for images acquired at different times. Species composition is spatially structured by topographic features and soil moisture conditions. The spatio-temporal variation of species composition reflects the growth and succession of different species, which is critical for understanding the evolutionary features of grassland ecosystems. Strengths and challenges of applying UAV-acquired imagery for vegetation studies are summarized at the end.
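The overall accuracy quoted above is conventionally computed from a classification confusion matrix; a minimal sketch with a hypothetical three-class matrix:

```python
import numpy as np

# Hypothetical 3-species confusion matrix (rows: reference, cols: predicted).
cm = np.array([[42, 3, 5],
               [4, 38, 8],
               [2, 6, 52]])
overall_accuracy = np.trace(cm) / cm.sum()  # correctly labelled / all samples
print(f"{overall_accuracy:.2%}")
```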

  11. First results from the TOPSAT camera

    NASA Astrophysics Data System (ADS)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  12. Automatic Spatio-Temporal Flow Velocity Measurement in Small Rivers Using Thermal Image Sequences

    NASA Astrophysics Data System (ADS)

    Lin, D.; Eltner, A.; Sardemann, H.; Maas, H.-G.

    2018-05-01

    An automatic spatio-temporal flow velocity measurement approach, using an uncooled thermal camera, is proposed in this paper. The basic principle of the method is to track visible thermal features at the water surface in thermal camera image sequences. Radiometric and geometric calibrations are first implemented to remove vignetting effects in the thermal imagery and to obtain the interior orientation parameters of the camera. An object-based unsupervised classification approach is then applied to detect the regions of interest for data referencing and thermal feature tracking. Subsequently, GCPs are extracted to orient the river image sequences, and local hot points are identified as tracking features. Afterwards, accurate dense tracking outputs are obtained using the pyramidal Lucas-Kanade method. To validate the accuracy potential of the method, measurements obtained from thermal feature tracking are compared with reference measurements taken by a propeller gauge. Results show great potential for automatic flow velocity measurement in small rivers using imagery from a thermal camera.
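The tracking step named above, pyramidal Lucas-Kanade, solves at each pyramid level a small least-squares system for the displacement of a feature window. A single-level sketch on a synthetic "hot spot" frame pair (an illustration, not the authors' implementation):

```python
import numpy as np

def lucas_kanade(im0, im1, pt, win=7):
    """Single-level Lucas-Kanade: solve the least-squares system A d = -b
    for the displacement of a small window centred on pt = (x, y)."""
    x, y = pt
    h = win // 2
    p0 = im0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = im1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p0)                 # spatial image gradients
    It = p1 - p0                             # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d                                 # (dx, dy) displacement estimate

# Synthetic "thermal" frames: a smooth hot spot moving 1 px to the right.
yy, xx = np.mgrid[0:32, 0:32]
frame0 = np.exp(-((xx - 15) ** 2 + (yy - 16) ** 2) / 8.0)
frame1 = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 8.0)
dx, dy = lucas_kanade(frame0, frame1, (15, 16))
print(f"dx={dx:.2f}, dy={dy:.2f}")  # dx should be close to 1, dy close to 0
```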

  13. Making Connections with Digital Data

    ERIC Educational Resources Information Center

    Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert

    2004-01-01

    State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…

  14. Application of ERTS-1 imagery to land use, forest density and soil investigations in Greece

    NASA Technical Reports Server (NTRS)

    Yassoglou, N. J.; Skordalakis, E.; Koutalos, A.

    1974-01-01

    Photographic and digital imagery received from ERTS-1 was analyzed and evaluated as to its usefulness for the assessment of agricultural and forest land resources. Black and white, and color composite imagery provided spectral and spatial data, which, when matched with temporal land information, provided the basis for a semidetailed land use and forest site evaluation cartography. Color composite photographs have provided some information on the status of irrigation of agricultural lands. Computer processed digital imagery was successfully used for detailed crop classification and semidetailed soil evaluation. The results and techniques of this investigation are applicable to ecological and geological conditions similar to those prevailing in the Eastern Mediterranean.

  15. Use of passive UAS imaging to measure biophysical parameters in a southern Rocky Mountain subalpine forest

    NASA Astrophysics Data System (ADS)

    Caldwell, M. K.; Sloan, J.; Mladinich, C. S.; Wessman, C. A.

    2013-12-01

    Unmanned Aerial Systems (UAS) can provide detailed, fine spatial resolution imagery for ecological uses not otherwise obtainable through standard methods. The use of UAS imagery for ecology is a rapidly evolving field, where the study of forest landscape ecology can be augmented using UAS imagery to scale and validate biophysical data from field measurements to spaceborne observations. High-resolution imagery provided by UAS (30 cm2 pixels) offers detailed canopy cover and forest structure data in a time-efficient and inexpensive manner. Using a GoPro Hero2 (2 mm focal length) camera mounted in the nose cone of a Raven unmanned system, we collected aerial and thermal data monthly during summer 2013 over two subalpine forests in the Southern Rocky Mountains in Colorado. These forests are dominated by lodgepole pine (Pinus contorta) and have experienced insect-driven (primarily mountain pine beetle, MPB; Dendroctonus ponderosae) mortality. Objectives of this study include observations of forest health variables such as canopy water content (CWC) from thermal imagery, and leaf area index (LAI), biomass, and forest productivity from the Normalized Difference Vegetation Index (NDVI) derived from UAS imagery. Observations were validated with ground measurements. Images were processed using a combination of Agisoft PhotoScan Professional software and ENVI remote imaging software. We utilized the Leaf Area Index Calculator (LAIC) software developed by Córcoles et al. (2013) for calculating LAI from digital images, modified to conform to the leaf area of needle-leaf trees as in Chen and Cihlar (1996). LAIC uses a K-means cluster analysis to decipher the RGB levels for each pixel, distinguish between green aboveground vegetation and other materials, and project leaf area per unit of ground surface area (i.e., half the total needle surface area per unit area).
    Preliminary LAIC UAS data show summer average LAI was 3.8 in the most dense forest stands and 2.95 in less dense stands. These data correspond to 4.8 and 2.2, respectively, from in situ LI-COR LAI-2200 measurements (Wilcoxon signed rank p value of 0.25, indicating no significant difference between LAIC and field measurements). Imagery over plots indicates about 12% canopy cover from standing dead vegetation within plots, which corresponds to about a 10% estimate of standing dead measured in the field. The next steps for analysis include calculating NDVI and CWC for plot-level vegetation, and scaling to the surrounding forested landscape. These high-resolution estimates from UAS imagery will provide forest stand-to-landscape level biophysical data for forest health assessments, management, drought and disturbance monitoring, and climate change modeling. Chen, J. M., and J. Cihlar. 1996. Retrieving LAI of boreal conifer forests using Landsat TM images. RSE 55:153-162. Córcoles, J., J. Ortega, D. Hernández, and M. Moreno. 2013. Use of digital photography from unmanned aerial vehicles for estimation of LAI in onion. EJoA. 45:96-104.
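The K-means step that LAIC performs on pixel RGB values can be sketched as follows; the synthetic pixel statistics and the greenness rule used to pick the vegetation cluster are illustrative assumptions, not the published LAIC code:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic pixel samples standing in for a UAS image (values are invented).
rng = np.random.default_rng(1)
veg = rng.normal([60, 140, 50], 10, size=(300, 3))    # green-ish pixels
soil = rng.normal([120, 100, 80], 10, size=(200, 3))  # brown-ish pixels
pixels = np.vstack([veg, soil])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
# Label the cluster whose centre is most green-dominant as vegetation.
greenness = km.cluster_centers_[:, 1] - km.cluster_centers_[:, [0, 2]].mean(axis=1)
veg_label = int(np.argmax(greenness))
cover = float(np.mean(km.labels_ == veg_label))  # vegetated area fraction
print(f"green cover fraction: {cover:.2f}")
```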

  16. Google Sky: A Digital View of the Night Sky

    NASA Astrophysics Data System (ADS)

    Connolly, A.; Scranton, R.; Ornduff, T.

    2008-11-01

    From its inception, astronomy has been a visual science: from careful observations of the sky using the naked eye, to the use of telescopes and photographs to map the distribution of stars and galaxies, to the current era of digital cameras that can image the sky over many decades of the electromagnetic spectrum. Sky in Google Earth (http://earth.google.com) and Google Sky (http://www.google.com/sky) continue this tradition, providing an intuitive visual interface to some of the largest astronomical imaging surveys of the sky. Streaming multi-color imagery, catalogs, and time-domain data, and annotating interesting astronomical sources and events with placemarks, podcasts, and videos, Sky provides a panchromatic view of the universe accessible to anyone with a computer. Beyond simple exploration of the sky, Google Sky enables users to create and share content with others around the world. With an open interface available on Linux, Mac OS X, and Windows, and translations of the content into over 20 languages, we present Sky as the embodiment of a virtual telescope for discovering and sharing the excitement of astronomy and science as a whole.

  17. Characterizing Urban Volumetry Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Santos, T.; Rodrigues, A. M.; Tenedório, J. A.

    2013-05-01

    Urban indicators are efficient tools designed to simplify, quantify and communicate relevant information for land planners. Since urban data has a strong spatial representation, one can use geographical data as the basis for constructing information regarding urban environments. One important source of information about the land status is imagery collected through remote sensing. Afterwards, using digital image processing techniques, thematic detail can be extracted from those images and used to build urban indicators. Most common metrics are based on area (2D) measurements. These include indicators like impervious area per capita or surface occupied by green areas, having usually as primary source a spectral image obtained through a satellite or airborne camera. More recently, laser scanning data has become available for large-scale applications. Such sensors acquire altimetric information and are used to produce Digital Surface Models (DSM). In this context, LiDAR data available for the city is explored along with demographic information, and a framework to produce volumetric (3D) urban indexes is proposed, and measures like Built Volume per capita, Volumetric Density and Volumetric Homogeneity are computed.
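A volumetric indicator such as built volume per capita reduces to summing DSM-minus-DTM heights over built-up cells; a minimal sketch with hypothetical rasters, an assumed 2 m height threshold, and an assumed population:

```python
import numpy as np

dsm = np.array([[12.0, 15.0], [10.0, 3.0]])  # surface model incl. buildings, m
dtm = np.array([[2.0, 2.0], [2.0, 3.0]])     # bare-earth terrain model, m

heights = dsm - dtm
built_mask = heights > 2.0        # assumed rule: objects taller than 2 m
cell_area = 1.0                   # m^2 per raster cell (assumed)
built_volume = heights[built_mask].sum() * cell_area
population = 3                    # residents in the raster extent (assumed)
print(built_volume / population)  # built volume per capita, m^3
```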

  18. Development of digital shade guides for color assessment using a digital camera with ring flashes.

    PubMed

    Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan

    2011-02-01

    Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and the camera's white balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and those shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using digital shade guides, which was verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable, with relatively low matching ability. In conclusion, the reliability of color matching with digital images is much influenced by the illuminants and the camera's white balance setups, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
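Color matching against such digital shade guides is typically scored as a distance in L*a*b* space; the CIE76 difference below is a common choice and an assumption (the abstract does not name a metric), and the readings are hypothetical:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.dist(lab1, lab2)

reference = (72.0, 1.5, 18.0)  # spectrophotometer L*a*b* reading (hypothetical)
captured = (70.0, 2.0, 16.5)   # L*a*b* sampled from the digital image
print(round(delta_e_76(reference, captured), 2))
```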

  19. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  20. Three-Dimensional Pathology Specimen Modeling Using "Structure-From-Motion" Photogrammetry: A Powerful New Tool for Surgical Pathology.

    PubMed

    Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane

    2018-05-30

    Context: Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in the context of geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. Objective: To describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models and its uses in routine surgical pathology practice as well as medical education. Design: We modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using user control software, a digital single-lens reflex camera, and a digital turntable, to generate a 3D model with the output in a PDF file. Results: The entity demonstrated in each specimen was well demarcated and easily identified. Adjacent normal tissue could also be easily distinguished. Colors were preserved. The concave shapes of any cystic structures and normal convex rounded structures were discernible. Surgically important regions were identifiable. Conclusions: Macroscopic 3D modeling of specimens can be achieved through Structure-From-Motion photogrammetry technology and can be applied quickly and easily in routine laboratory practice. There are numerous advantages to the use of 3D photogrammetry in pathology, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printed specimen models.

  1. STS-36 Mission Specialist Hilmers with AEROLINHOF camera on aft flight deck

    NASA Image and Video Library

    1990-03-03

    STS-36 Mission Specialist (MS) David C. Hilmers points the large-format AEROLINHOF camera out overhead window W7 on the aft flight deck of Atlantis, Orbiter Vehicle (OV) 104. Hilmers records Earth imagery using the camera. Hilmers and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD) devoted mission.

  2. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
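The hot-pixel technique mentioned above exploits the fact that a true hot pixel occupies a fixed sensor location, so any spread in its digitized position across frames measures jitter introduced by the digitizer; a purely synthetic sketch:

```python
import numpy as np

# Simulated per-frame digitized column positions of one hot pixel: the true
# column is constant, and the added noise stands in for line-sampling jitter.
rng = np.random.default_rng(2)
true_col = 311.0
digitized_cols = true_col + rng.normal(0, 0.4, 200)  # per-frame positions, px
jitter_rms = float(digitized_cols.std())             # RMS position error, px
print(f"jitter RMS: {jitter_rms:.2f} px")
```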

  3. Unique digital imagery interface between a silicon graphics computer and the kinetic kill vehicle hardware-in-the-loop simulator (KHILS) wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.

    1998-07-01

    Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR Scene Projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 - 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences up to a 120 Hz frame rate. Dynamic real-time imagery to the KHILS WISP projector system, at a 120 Hz frame rate, can be provided from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery through a custom CSA-built interface which is available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.

  4. Computerized digital dermoscopy.

    PubMed

    Gewirtzman, A J; Braun, R P

    2003-01-01

    Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to get second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products begins to grow, so do the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments which can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated as well as the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.

  5. Optimization of digitization procedures in cultural heritage preservation

    NASA Astrophysics Data System (ADS)

    Martínez, Bea; Mitjà, Carles; Escofet, Jaume

    2013-11-01

    The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics, and sculptures allow not only for wider diffusion and online transmission, but also for the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects, and cameras only for volumetric or flat highly texturized materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera, and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in reproduction work so as to preserve the original item's properties as faithfully as possible. This work presents an overview of the methods used for camera system characterization, as well as the best procedures to identify and counteract the effects of residual lens aberrations, sensor aliasing, image illumination, color management, and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and black-and-white glass plate photographic negatives.

  6. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, creating a barrier for institutes with limited funding and thereby hampering progress. An assessment is made of whether a low-cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, within its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
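The per-pixel selection that underlies focus stacking can be sketched as follows. This is a generic illustration of the technique (pick, for each pixel, the stack slice with the strongest local sharpness response), not the TG-4's actual in-camera algorithm:

```python
import numpy as np

def focus_stack(slices):
    """Merge a focal stack into one all-in-focus image.

    slices: list of 2-D float arrays (grayscale frames focused at
    different depths). For each pixel, the frame with the strongest
    local Laplacian response (a simple sharpness measure) wins.
    """
    stack = np.stack(slices)                       # (n, h, w)
    # Discrete Laplacian magnitude as a per-slice sharpness map.
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)                  # winning slice per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two synthetic frames: each is sharp (striped) in one half, flat in the other.
a = np.zeros((8, 8)); a[:, :4] = np.tile([0.0, 1.0], (8, 2))  # detail on left
b = np.zeros((8, 8)); b[:, 4:] = np.tile([0.0, 1.0], (8, 2))  # detail on right
fused = focus_stack([a, b])    # left half from a, right half from b
```

Dedicated stacking software additionally aligns the slices and blends seams; the argmax selection above is the core idea.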

  7. Global lunar-surface mapping experiment using the Lunar Imager/Spectrometer on SELENE

    NASA Astrophysics Data System (ADS)

    Haruyama, Junichi; Matsunaga, Tsuneo; Ohtake, Makiko; Morota, Tomokatsu; Honda, Chikatoshi; Yokota, Yasuhiro; Torii, Masaya; Ogawa, Yoshiko

    2008-04-01

    The Moon is the nearest celestial body to the Earth, and understanding it is among the most important issues confronting the geosciences and planetary sciences. Japan will launch the lunar polar orbiter SELENE (Kaguya) (Kato et al., 2007) in 2007 as the first mission of the Japanese long-term lunar exploration program and acquire data for scientific knowledge and possible utilization of the Moon. An optical sensing instrument called the Lunar Imager/Spectrometer (LISM) is carried on SELENE. LISM is required to provide high-resolution digital imagery and spectroscopic data for the entire lunar surface. LISM was designed to include three specialized sub-instruments: a terrain camera (TC), a multi-band imager (MI), and a spectral profiler (SP). The TC is a high-resolution stereo camera with 10-m spatial resolution from the SELENE nominal altitude of 100 km and a stereo angle of 30° to provide stereo pairs from which digital terrain models (DTMs) with a height resolution of 20 m or better will be produced. The MI is a multi-spectral imager with four and five color bands at 20 m and 60 m spatial resolution in the visible and near-infrared ranges, respectively, which will provide data used to distinguish geological units in detail. The SP is a line spectral profiler with a 400-m-wide footprint and 300 spectral bands with 6-8 nm spectral resolution in the visible to near-infrared ranges. The SP data will be sufficiently powerful to identify the mineral composition of the lunar surface. Moreover, LISM will provide data with a spatial resolution, signal-to-noise ratio, and covered spectral range superior to those of past Earth-based and spacecraft-based observations. In addition to the hardware instrumentation, we have studied operation plans for global data acquisition within the limited total data volume allotment per day.
Results show that the TC and MI can achieve global observations within the restrictions by sharing the TC and MI observation periods, adopting appropriate data compression, and executing necessary SELENE orbital plane change operations to ensure global coverage by MI. Pre-launch operation planning has resulted in possible global TC high-contrast imagery, TC stereoscopic imagery, and MI 9-band imagery in one nominal mission period. The SP will also acquire spectral line profiling data for nearly the entire lunar surface. The east-west interval of the SP strip data will be 3-4 km at the equator by the end of the mission and shorter at higher latitudes. We have proposed execution of SELENE roll cant operations three times during the nominal mission period to execute calibration site observations, and have reached agreement on this matter with the SELENE project. We present LISM global surface mapping experiments for instrumentation and operation plans. The ground processing systems and the data release plan for LISM data are discussed briefly.

  8. Lyman-alpha imagery of Comet Kohoutek

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.; Page, T. L.; Meier, R. R.; Prinz, D. K.

    1974-01-01

    Electrographic imagery of Comet Kohoutek in the 1100-1500 A wavelength range was obtained from a sounding rocket on Jan. 8, 1974, and from the Skylab space station on 13 occasions between Nov. 26, 1973 and Feb. 2, 1974. These images are predominantly due to Lyman-alpha (1216 A) emission from the hydrogen coma of the comet. The rocket pictures have been calibrated for absolute sensitivity and a hydrogen production rate has been determined. However, the Skylab camera suffered degradation of its sensitivity during the mission, and its absolute sensitivity for each observation can only be estimated by comparison of the comet images with those taken by the rocket camera, with imagery of the geocoronal Lyman-alpha glow, of the moon in reflected Lyman-alpha, and of ultraviolet-bright stars. The rocket and geocoronal comparisons are used to derive a preliminary, qualitative history of the development of the cometary hydrogen coma and the associated hydrogen production rate.

  9. Formulation of image quality prediction criteria for the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.

    1973-01-01

    Image quality criteria are defined and mathematically formulated for the prediction computer program to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery, to resolve small spatial details and slopes, is formulated as the detectability of a right-circular cone with the surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery, to observe spectral characteristics, is formulated as the minimum detectable albedo variation. The general goal of encompassing, but not exceeding, the range of the scene radiance distribution within a single commandable camera dynamic-range setting is also considered.

  10. OSMOSIS: a new joint laboratory between SOFRADIR and ONERA for the development of advanced DDCA with integrated optics

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Matallah, Noura; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Jenouvrier, Pierre; Mallet, Eric; Reibel, Yann

    2014-06-01

    Today, both military and civilian applications require miniaturized optical systems in order to provide an imaging capability to vehicles with small payload capacities. After the development of megapixel focal plane arrays (FPA) with micro-sized pixels, this miniaturization will become feasible with the integration of optical functions in the detector area. In the field of cooled infrared imaging systems, the detector area is the Detector-Dewar-Cooler Assembly (DDCA). SOFRADIR and ONERA have launched a new research and innovation partnership, called OSMOSIS, to develop disruptive technologies for the DDCA to improve the performance and compactness of optronic systems. With this collaboration, we will break down the technological barriers of the DDCA, a sealed and cooled environment dedicated to the infrared detector, to explore Dewar-level integration of optics. This technological breakthrough will bring more compact multipurpose thermal imaging products, as well as new thermal capabilities such as 3D imagery or multispectral imagery. Previous developments will be recalled (SOIE and FISBI cameras) and new developments will be presented. In particular, we will focus on a dual-band MWIR-LWIR camera and a multichannel camera.

  11. Environmental applications utilizing digital aerial imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monday, H.M.

    1995-06-01

    This paper discusses the use of satellite imagery, aerial photography, and computerized airborne imagery as applied to environmental mapping, analysis, and monitoring. A project conducted by the City of Irving, Texas, involves compliance with national pollutant discharge elimination system (NPDES) requirements stipulated by the Environmental Protection Agency. The purpose of the project was the development and maintenance of a stormwater drainage utility. Digital imagery was collected for a portion of the city to map the City's porous and impervious surfaces, which will then be overlaid with property boundaries in the City's existing Geographic Information System (GIS). This information will allow the City to determine an equitable tax for each land parcel according to the amount of water each parcel is contributing to the stormwater system. Another project involves environmental compliance for warm water discharges created by utility companies. Environmental consultants are using digital airborne imagery to analyze thermal plume effects as well as monitoring power generation facilities. A third project involves wetland restoration. Due to freeway and other forms of construction, plus a major reduction of fresh water supplies, the Southern California coastal wetlands are being seriously threatened. These wetlands, rich spawning grounds for plant and animal life, are home to thousands of waterfowl and shore birds that use this habitat for nesting and feeding grounds. Under the leadership of Southern California Edison (SCE) and CALTRANS (California Department of Transportation), several wetland areas such as the San Dieguito Lagoon (Del Mar, California), the Sweetwater Marsh (San Diego, California), and the Tijuana Estuary (San Diego, California) are being restored and closely monitored using digital airborne imagery.

  12. Digital image film generation: from the photoscientist's perspective

    USGS Publications Warehouse

    Boyd, John E.

    1982-01-01

    The technical sophistication of photoelectronic transducers, integrated circuits, and laser-beam film recorders has made digital imagery an alternative to traditional analog imagery for remote sensing. Because a digital image is stored in discrete digital values, image enhancement is possible before the data are converted to a photographic image. To create a special film-reproduction curve - which can simulate any desired gamma, relative film speed, and toe/shoulder response - the digital-to-analog transfer function of the film recorder is uniquely defined and implemented by a lookup table in the film recorder. Because the image data are acquired in spectral bands, false-color composites also can be given special characteristics by selecting a reproduction curve tailored for each band.
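The lookup-table mechanism described above can be sketched in a few lines. The gamma value, the logistic toe/shoulder shaping, and the blend weights below are illustrative assumptions, not the film recorder's actual transfer function:

```python
import numpy as np

def film_lut(gamma=0.7, levels=256):
    """Build a lookup table approximating a film reproduction curve:
    a power-law (gamma) mid-section blended with an S-shaped logistic
    roll-off that softens the toe and shoulder. All constants are
    illustrative choices for the sketch.
    """
    x = np.linspace(0.0, 1.0, levels)
    mid = x ** gamma                               # straight-line (gamma) portion
    s = 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))    # logistic toe/shoulder
    curve = 0.7 * mid + 0.3 * s
    return np.round(curve * (levels - 1)).astype(np.uint8)

lut = film_lut()
image = np.array([[0, 64, 128, 255]], dtype=np.uint8)
rendered = lut[image]          # apply the reproduction curve per pixel
```

In a film recorder the same idea runs in hardware: each digital code value is mapped through the table before driving the laser beam, so any desired gamma, relative speed, or toe/shoulder response can be simulated by loading a different table.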

  13. Introduction to the local enhancement of underwater imagery

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    1995-06-01

    Image-based detection of submerged objects is frequently confounded by optical distortions in the aqueous medium. For example, scattering can severely degrade contrast and resolution in underwater (UW) images when illumination systems and cameras are not range-gated. Prior to the development of range-gated imaging, much research emphasis was placed upon the analysis of greyscale imagery acquired under incoherent illumination. Primarily as a result of current emphasis on coherent optical technologies, the progress of image processing (IP) research that pertains to UW imagery has lagged IP hardware and software development. In this paper, we summarize methods for the digital clarification of images that portray actively illuminated UW scenes, i.e., images of floodlit objects. We model the primary UW image components as: a) contrast degradation resulting from illuminant backscattering from the water column, b) a return signal that results from backscattering of the illuminant from the object of regard, and c) resolution loss, due to forward scattering of the return signal. Letting items a) and c) constitute error sources, one can locally apply the appropriate filters to reduce the contribution of such errors. Our technique emphasizes local enhancement, as opposed to the global methods used in previous imaging practice. Our enhancement filters are based upon image-algebraic templates that are designed to compensate for the effects of single and multiple scattering as well as absorption within the water column. Discussion is based upon image clarity, algorithmic complexity, and computational efficiency.
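The flavor of local (windowed) enhancement can be illustrated with a simple local contrast stretch: subtract the local mean, which approximates the backscatter veil of component a), and rescale by the local standard deviation. This is a generic stand-in for the paper's image-algebraic templates, not the authors' filters; the window size and gain are arbitrary:

```python
import numpy as np

def local_enhance(img, win=7, k=0.6):
    """Window-based local contrast stretch on a float image in [0, 1]:
    per pixel, subtract the mean of its win x win neighborhood and
    rescale by the neighborhood standard deviation."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    # Sliding-window statistics via stacked shifted copies.
    shifts = [np.roll(np.roll(p, dy, 0), dx, 1)[pad:-pad, pad:-pad]
              for dy in range(-pad, pad + 1)
              for dx in range(-pad, pad + 1)]
    block = np.stack(shifts)
    mean, std = block.mean(axis=0), block.std(axis=0) + 1e-6
    out = 0.5 + k * (img - mean) / std     # re-centered, locally stretched
    return np.clip(out, 0.0, 1.0)

# A flat, low-contrast "hazy" frame: enhancement raises its contrast.
hazy = 0.6 + 0.05 * np.random.default_rng(0).random((32, 32))
crisp = local_enhance(hazy)
```

Because the statistics are computed per neighborhood, the correction adapts to spatially varying backscatter, which is exactly what a single global stretch cannot do.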

  14. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  15. Computer Images for Research, Teaching, and Publication in Art History and Related Disciplines.

    ERIC Educational Resources Information Center

    Rhyne, Charles S.

    The future of digital imagery has emerged as one of the central concerns of professionals in many fields, yet only a handful of art historians have taken advantage of the profession's unique expertise in the reading and interpretation of images. Art historians need to participate in scholarship defining the roles and uses of digital imagery,…

  16. An atlas of November 1978 synthetic aperture radar digitized imagery for oil spill studies

    NASA Technical Reports Server (NTRS)

    Maurer, H. E.; Oderman, W.; Crosswell, W. F.

    1982-01-01

    A data set is described which consists of digitized synthetic aperture radar (SAR) imagery plus correlative data and some preliminary analysis results. This data set should be of value to experimenters who are interested in the SAR instrument and its application to the detection and monitoring of oil on water and other distributed targets.

  17. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Currently, digital cameras are to a certain extent replacing conventional cameras for static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications to solid as well as fluid impact problems are presented.
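The principle of TDI can be sketched numerically: because the charge is clocked along the array in step with the moving scene, each output line is the sum of many short exposures of the same scene line, so signal grows linearly while noise grows only as the square root of the stage count. The sketch below sums digital frames as an illustration; a real TDI CCD shifts charge in the analog domain:

```python
import numpy as np

rng = np.random.default_rng(1)

def tdi_capture(scene_line, n_stages=16, noise=0.2):
    """Simulate TDI for one scene line: accumulate n_stages noisy
    exposures of the same (tracked) line, then normalize back to
    scene units. Stage count and noise level are illustrative."""
    acc = np.zeros_like(scene_line, dtype=float)
    for _ in range(n_stages):
        acc += scene_line + rng.normal(0.0, noise, scene_line.shape)
    return acc / n_stages

line = np.linspace(0.0, 1.0, 64)             # the "true" scene line
single = line + rng.normal(0.0, 0.2, 64)     # one noisy exposure
tdi = tdi_capture(line)                      # 16 co-added exposures
# The TDI line tracks the scene much more closely than one exposure.
```

The same synchronization requirement is what makes TDI attractive for dynamic scenes moving at a known, constant velocity past the sensor.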

  18. Converting aerial imagery to application maps

    USDA-ARS?s Scientific Manuscript database

    Over the last couple of years in Agricultural Aviation and at the 2014 and 2015 NAAA conventions, we have written about and presented both single-camera and two-camera imaging systems for use on agricultural aircraft. Many aerial applicators have shown a great deal of interest in the imaging systems...

  19. A comparison of digital multi-spectral imagery versus conventional photography for mapping seagrass in Indian River Lagoon, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virnstein, R.; Tepera, M.; Beazley, L.

    1997-06-01

    A pilot study is very briefly summarized in the article. The study tested the potential of multi-spectral digital imagery for discrimination of seagrass densities and species, algae, and bottom types. Imagery was obtained with the Compact Airborne Spectral Imager (casi), and two flight lines were flown in hyper-spectral mode. The photogrammetric method used allowed interpretation of the highest quality product, eliminating limitations caused by outdated or poor quality base maps and the errors associated with transfer of polygons. Initial image analysis indicates that the multi-spectral imagery has several advantages, including sophisticated spectral signature recognition and classification, ease of geo-referencing, and rapid mosaicking.

  20. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescence microscope for the purpose of this study. Results: We obtained comparable results when capturing light microscopy images, but the results were not as satisfactory for fluorescence microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  1. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.

    PubMed

    Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan

    2017-12-01

    Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners today have easy, low-cost access to aerial photographs at remote locations. The present paper aims to explore the boundaries within which low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examinations. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording for seeking out scattered body parts was efficient. In contrast, less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Seismic Signatures of Brine Release at Blood Falls, Taylor Glacier, Antarctica

    NASA Astrophysics Data System (ADS)

    Carr, C. G.; Pettit, E. C.; Carmichael, J.

    2017-12-01

    Blood Falls is created by the release of subglacially-sourced, iron-rich brine at the surface of Taylor Glacier, McMurdo Dry Valleys, Antarctica. The supraglacial portion of this hydrological feature is episodically active. Englacial liquid brine flow occurs despite ice temperatures of -17°C, and we document supraglacial liquid brine release despite ambient air temperatures averaging -20°C. In this study, we use data from a seismic network, time-lapse cameras, and publicly available weather station data to address the questions: what are the characteristics of seismic events that occur during Blood Falls brine release, and how do these compare with seismic events that occur during times of Blood Falls quiescence? How are different processes observable in the time-lapse imagery represented in the seismic record? Time-lapse photography constrains the timing of brine release events during the austral winter of 2014. We use a noise-adaptive digital power detector to identify seismic events and cluster analysis to identify repeating events based on waveform similarity across the network. During the 2014 wintertime brine release, high-energy repeated seismic events occurred proximal to Blood Falls. We investigate the ground motions associated with these clustered events, as well as their spatial distribution. We see evidence of possible tremor during the brine release periods, an indicator of fluid movement. If distinctive seismic signatures are associated with Blood Falls brine release, they could be identified based solely on seismic data without any aid from time-lapse cameras. Passive seismologic monitoring has the benefit of continuity during the polar night and other poor visibility conditions, which make time-lapse imagery unusable.
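The waveform-similarity clustering step can be illustrated with a toy normalized cross-correlation matcher and greedy grouping. This is a generic sketch of repeating-event detection, not the study's noise-adaptive power detector or its cluster analysis; the threshold and synthetic waveforms are arbitrary:

```python
import numpy as np

def correlate_norm(a, b):
    """Peak normalized cross-correlation between two equal-length waveforms."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

def cluster_events(waveforms, threshold=0.8):
    """Greedy grouping of repeating events: a waveform joins the first
    cluster whose template it matches above the correlation threshold,
    otherwise it seeds a new cluster."""
    clusters = []                          # list of (template, member indices)
    for i, w in enumerate(waveforms):
        for template, members in clusters:
            if correlate_norm(template, w) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((w, [i]))
    return [m for _, m in clusters]

# Three noisy repeats of one decaying wavelet, plus one unrelated event.
rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 200)
base = np.sin(t) * np.exp(-t / 6.0)
events = [base + 0.05 * rng.standard_normal(200) for _ in range(3)]
events.append(rng.standard_normal(200))
groups = cluster_events(events)            # repeats grouped, outlier separate
```

Matching on peak correlation over all lags makes the grouping insensitive to small timing offsets between detections, which is why repeating sources cluster cleanly even without precise picks.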

  3. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.

  4. Using Google Streetview Panoramic Imagery for Geoscience Education

    NASA Astrophysics Data System (ADS)

    De Paor, D. G.; Dordevic, M. M.

    2014-12-01

    Google Streetview is a feature of Google Maps and Google Earth that allows viewers to switch from map or satellite view to 360° panoramic imagery recorded close to the ground. Most panoramas are recorded by Google engineers using special cameras mounted on the roofs of cars. Bicycles, snowmobiles, and boats have also been used, and sometimes the camera has been mounted on a backpack for off-road use by hikers and skiers or attached to scuba-diving gear for "Underwater Streetview (sic)." Streetview panoramas are linked together so that the viewer can change viewpoint by clicking forward and reverse buttons. They therefore create a 4-D touring effect. As part of the GEODE project ("Google Earth for Onsite and Distance Education"), we are experimenting with the use of Streetview imagery for geoscience education. Our web-based test application allows instructors to select locations for students to study. Students are presented with a set of questions or tasks that they must address by studying the panoramic imagery. Questions include identification of rock types, structures such as faults, and general geological setting. The student view is locked into Streetview mode until they submit their answers, whereupon the map and satellite views become available, allowing students to zoom out and verify their location on Earth. Student learning is scaffolded by automatic computerized feedback. There are many existing Streetview panoramas with rich geological content. Additionally, instructors and members of the general public can create panoramas, including 360° Photo Spheres, by stitching images taken with their mobile devices and submitting them to Google for evaluation and hosting. A multi-thousand-dollar, multi-directional camera and mount can be purchased from DIY-streetview.com. This allows power users to generate their own high-resolution panoramas. A cheaper, 360° video camera is soon to be released according to geonaute.com.
Thus there are opportunities for geoscience educators both to use existing Streetview imagery and to generate new imagery for specific locations of geological interest. The GEODE team includes the authors and: H. Almquist, C. Bentley, S. Burgin, C. Cervato, G. Cooper, P. Karabinos, T. Pavlis, J. Piatek, B. Richards, J. Ryan, R. Schott, K. St. John, B. Tewksbury, and S. Whitmeyer.

  5. Volcano dome dynamics at Mount St. Helens: Deformation and intermittent subsidence monitored by seismicity and camera imagery pixel offsets

    USGS Publications Warehouse

    Salzer, Jacqueline T.; Thelen, Weston A.; James, Mike R.; Walter, Thomas R.; Moran, Seth C.; Denlinger, Roger P.

    2016-01-01

    The surface deformation field measured at volcanic domes provides insights into the effects of magmatic processes, gravity- and gas-driven processes, and the development and distribution of internal dome structures. Here we study short-term dome deformation associated with earthquakes at Mount St. Helens, recorded by a permanent optical camera and seismic monitoring network. We use Digital Image Correlation (DIC) to compute the displacement field between successive images and compare the results to the occurrence and characteristics of seismic events during a 6 week period of dome growth in 2006. The results reveal that dome growth at Mount St. Helens was repeatedly interrupted by short-term meter-scale downward displacements at the dome surface, which were associated in time with low-frequency, large-magnitude seismic events followed by a tremor-like signal. The tremor was only recorded by the seismic stations closest to the dome. We find a correlation between the magnitudes of the camera-derived displacements and the spectral amplitudes of the associated tremor. We use the DIC results from two cameras and a high-resolution topographic model to derive full 3-D displacement maps, which reveal internal dome structures and the effect of the seismic activity on daily surface velocities. We postulate that the tremor is recording the gravity-driven response of the upper dome due to mechanical collapse or depressurization and fault-controlled slumping. Our results highlight the different scales and structural expressions during growth and disintegration of lava domes and the relationships between seismic and deformation signals.
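The core of DIC, measuring pixel offsets between successive images by correlating image patches, can be sketched in a toy form. Production DIC tracks many subsets with subpixel interpolation; the version below tracks one central template by exhaustive integer-pixel normalized cross-correlation:

```python
import numpy as np

def dic_offset(ref, cur, patch=16):
    """Estimate the integer pixel offset (dy, dx) of a central template
    from frame `ref` to frame `cur` by exhaustive normalized
    cross-correlation. Toy single-subset version of DIC."""
    h, w = ref.shape
    y0, x0 = (h - patch) // 2, (w - patch) // 2
    tpl = ref[y0:y0 + patch, x0:x0 + patch]
    tpl = (tpl - tpl.mean()) / tpl.std()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(h - patch + 1):
        for dx in range(w - patch + 1):
            win = cur[dy:dy + patch, dx:dx + patch]
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = np.mean(tpl * win)     # normalized correlation coefficient
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy - y0, best_dx - x0      # displacement in pixels

# Random texture shifted down 2 px and left 3 px between frames.
rng = np.random.default_rng(3)
scene = rng.random((48, 48))
shifted = np.roll(np.roll(scene, 2, axis=0), -3, axis=1)
```

Applied over a grid of subsets, such offsets form a dense displacement field; projecting fields from two cameras through a topographic model is what yields the full 3-D maps described in the abstract.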

  6. Corn and sorghum phenotyping using a fixed-wing UAV-based remote sensing system

    NASA Astrophysics Data System (ADS)

    Shi, Yeyin; Murray, Seth C.; Rooney, William L.; Valasek, John; Olsenholler, Jeff; Pugh, N. Ace; Henrickson, James; Bowden, Ezekiel; Zhang, Dongyan; Thomasson, J. Alex

    2016-05-01

    Recent development of unmanned aerial systems has created opportunities in automation of field-based high-throughput phenotyping by lowering flight operational cost and complexity and allowing flexible re-visit times and higher image resolution than satellite or manned airborne remote sensing. In this study, flights were conducted over corn and sorghum breeding trials in College Station, Texas, with a fixed-wing unmanned aerial vehicle (UAV) carrying two multispectral cameras and a high-resolution digital camera. The objectives were to establish the workflow and investigate the ability of UAV-based remote sensing to automate data collection of plant traits for developing genetic and physiological models. Most important among these traits were plant height and plant population, which are currently collected manually at high labor cost. Vegetation indices were calculated for each breeding cultivar from mosaicked and radiometrically calibrated multi-band imagery in order to be correlated with ground-measured plant heights, populations and yield across high genetic-diversity breeding cultivars. Growth curves were profiled with the aerially measured time-series height and vegetation index data. The next step of this study will be to investigate the correlations between aerial measurements and ground truth measured manually in the field and from lab tests.
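A typical vegetation index computable from calibrated multi-band mosaics is NDVI; the abstract does not name the specific indices used, so this is a representative example rather than the study's method:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from calibrated
    near-infrared and red reflectance bands:
    NDVI = (NIR - red) / (NIR + red), bounded in [-1, 1].
    Healthy canopy reflects strongly in NIR and absorbs red."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids divide-by-zero

# Toy 2x2 plot map: vigorous canopy (top row) vs sparse cover / bare soil.
nir = np.array([[0.50, 0.45], [0.30, 0.20]])
red = np.array([[0.05, 0.08], [0.20, 0.18]])
index = ndvi(nir, red)        # high values flag dense, healthy canopy
```

Averaging such an index per breeding plot over repeated flights is what produces the time-series growth curves mentioned above.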

  7. Automated 3D architecture reconstruction from photogrammetric structure-and-motion: A case study of the One Pillar Pagoda, Hanoi, Vietnam

    NASA Astrophysics Data System (ADS)

    To, T.; Nguyen, D.; Tran, G.

    2015-04-01

    Vietnam's heritage sites have declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning and reasonable investment. Moreover, in the field of cultural heritage, automated photogrammetric systems based on Structure-from-Motion (SfM) techniques are widely used. With the potential for high resolution, low cost, a large field of view, ease of use, rapidity and completeness, the derivation of 3D metric information from structure-and-motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with VisualSFM, CMPMVS (Multi-View Reconstruction) and SURE (Photogrammetric Surface Reconstruction from Imagery) software. The final result is a scaled 3D model of the One Pillar Pagoda, displayed from different views in MeshLab software.

  8. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of being employed for mapping purposes due to its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementing traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from the four cardinal directions by using monoplotting functionalities. The developed application allows users to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate heights of buildings, ground distances and basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, the quality of available parameters (DEM, calibration and orientation values), user expertise and measuring capability.

  9. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
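As a quick consistency check on the numbers quoted above (a back-of-envelope sketch, not TAGCAMS flight documentation), the small-angle relation between the 0.28 mrad/pixel scale and the 2592 × 1944 detector yields a field of view close to the stated "roughly 44° × 32°":

```python
import math

# Values taken from the abstract: 2592 x 1944 pixel detector, 0.28 mrad/pixel.
PIXELS_X, PIXELS_Y = 2592, 1944
PIXEL_SCALE_RAD = 0.28e-3  # 0.28 mrad per pixel

def fov_degrees(n_pixels: int, pixel_scale_rad: float) -> float:
    """Small-angle approximation of the full field of view along one axis."""
    return math.degrees(n_pixels * pixel_scale_rad)

fov_x = fov_degrees(PIXELS_X, PIXEL_SCALE_RAD)  # ~41.6 deg across
fov_y = fov_degrees(PIXELS_Y, PIXEL_SCALE_RAD)  # ~31.2 deg down
```

The ~42° × 31° result is slightly below the quoted figure, as expected for a linear approximation that ignores lens distortion toward the field edges.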

  10. Investigation of Skylab imagery for regional planning. [New York, New Jersey, and Connecticut

    NASA Technical Reports Server (NTRS)

    Harting, W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It is feasible to use Earth Terrain Camera imagery to detect four land uses (vacant land, developed land, streets, and water) for general regional planning purposes. Multispectral imagery is suitable for detecting, mapping, and measuring water bodies as small as two acres. Sufficient information can be extracted to prepare graphic and pictorial representations of general growth and development patterns, but it cannot be incorporated into an inventory file for predictive models.

  11. Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera

    NASA Technical Reports Server (NTRS)

    Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.

    1988-01-01

    The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 A wavelength range.

  12. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1-megapixel (MP) autofocusing Canon point-and-shoot digital camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  13. Radar data processing and analysis

    NASA Technical Reports Server (NTRS)

    Ausherman, D.; Larson, R.; Liskow, C.

    1976-01-01

    Digitized four-channel radar images corresponding to particular areas from the Phoenix and Huntington test sites were generated in conjunction with prior experiments performed to collect X- and L-band synthetic aperture radar imagery of these two areas. The methods for generating this imagery are documented. A secondary objective was the investigation of digital processing techniques for extraction of information from the multiband radar image data. Following the digitization, the remaining resources permitted a preliminary machine analysis to be performed on portions of the radar image data. The results, although necessarily limited, are reported.

  14. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. This study evaluated the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  15. Characterization of instream hydraulic and riparian habitat conditions and stream temperatures of the Upper White River Basin, Washington, using multispectral imaging systems

    USGS Publications Warehouse

    Black, Robert W.; Haggland, Alan; Crosby, Greg

    2003-01-01

    Instream hydraulic and riparian habitat conditions and stream temperatures were characterized for selected stream segments in the Upper White River Basin, Washington. An aerial multispectral imaging system used digital cameras to photograph the stream segments across multiple wavelengths to characterize fish habitat and temperature conditions. All imagery was georeferenced. Fish habitat features were photographed at a resolution of 0.5 meter and temperature imagery at a resolution of 1.0 meter. The digital multispectral imagery was classified using commercially available software. Aerial photographs were taken on September 21, 1999. Field habitat data were collected from August 23 to October 12, 1999, to evaluate the measurement accuracy and effectiveness of the multispectral imaging in determining the extent of the instream habitat variables. Fish habitat features assessed by this method were the abundance of instream hydraulic features such as pool and riffle habitats, turbulent and non-turbulent habitats, riparian composition, the abundance of large woody debris in the stream and riparian zone, and stream temperatures. Factors such as the abundance of instream woody debris, the location and frequency of pools, and stream temperatures generally are known to have a significant impact on salmon. Instream woody debris creates the habitat complexity necessary to maintain a diverse and healthy salmon population. The abundance of pools is indicative of a stream's ability to support fish and other aquatic organisms. Changes in water temperature can affect aquatic organisms by altering metabolic rates and oxygen requirements, altering their sensitivity to toxic materials, and affecting their ability to avoid predators.
The specific objectives of this project were to evaluate the use of an aerial multispectral imaging system to accurately identify instream hydraulic features and surface-water temperatures in the Upper White River Basin, to use the multispectral system to help establish baseline instream/riparian habitat conditions in the study area, and to qualitatively assess the imaging system for possible use in other Puget Sound rivers. For the most part, all multispectral imagery-based estimates of total instream riffle and pool area were less than field measurements. The imagery-based estimates for riffle habitat area ranged from 35.5 to 83.3 percent less than field measurements. Pool habitat estimates ranged from 139.3 percent greater than field measurements to 94.0 percent less than field measurements. Multispectral imagery-based estimates of turbulent habitat conditions ranged from 9.3 percent greater than field measurements to 81.6 percent less than field measurements. Multispectral imagery-based estimates of non-turbulent habitat conditions ranged from 27.7 to 74.1 percent less than field measurements. The absolute average percentage of difference between field and imagery-based habitat type areas was less for the turbulent and non-turbulent habitat type categories than for pools and riffles. The estimate of woody debris by multispectral imaging was substantially different than field measurements; percentage of differences ranged from +373.1 to -100 percent. Although the total area of riffles, pools, and turbulent and non-turbulent habitat types measured in the field were all substantially higher than those estimated from the multispectral imagery, the percentage of composition of each habitat type was not substantially different between the imagery-based estimates and field measurements.
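The imagery-versus-field comparisons above use a signed percent difference; a minimal sketch of that convention (with hypothetical area numbers, not values from the study) is:

```python
def percent_difference(imagery_area: float, field_area: float) -> float:
    """Signed percent difference of an imagery-based estimate relative to the field measurement."""
    return (imagery_area - field_area) / field_area * 100.0

# Hypothetical areas, chosen only to illustrate the sign convention:
# a negative value means the imagery-based estimate fell below the field measurement.
print(percent_difference(400.0, 600.0))  # about -33.3 (imagery ~33% less than field)
```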

  16. Can light-field photography ease focusing on the scalp and oral cavity?

    PubMed

    Taheri, Arash; Feldman, Steven R

    2013-08-01

    Capturing a well-focused image with an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data on the color, intensity, and direction of light rays. With this directional information, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The accompanying computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information across the full depth of field than conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a "camera head" containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.
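A back-of-envelope calculation (my own arithmetic, not from the paper) shows why both readout modes fit comfortably within a 100 Mbyte/second serial link:

```python
# Uncompressed data rate in bytes per second, ignoring protocol overhead (an assumption).
def data_rate_bytes_per_s(pixels_per_s: float, bits_per_pixel: int) -> float:
    return pixels_per_s * bits_per_pixel / 8.0

fast_mode = data_rate_bytes_per_s(5e6, 12)     # 12-bit mode at 5 x 10^6 pixels/s
ocean_mode = data_rate_bytes_per_s(1.5e5, 16)  # 16-bit oceanographic mode
print(fast_mode, ocean_mode)  # 7500000.0 300000.0 -- both well under 100 Mbyte/s
```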

  18. STS-36 Mission Specialist Mullane uses 70mm HASSELBLAD camera on flight deck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-36 Mission Specialist Richard M. Mullane points 70mm HASSELBLAD camera out overhead window W8 on the aft flight deck of Atlantis, Orbiter Vehicle (OV) 104. Mullane is recording Earth imagery with the camera. Mullane and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD) devoted mission. Note: Mullane is wearing an orange 'Tigers' t-shirt.

  19. Ground-Truthing Moderate Resolution Satellite Imagery with Near-Surface Canopy Images in Hawai'i's Tropical Cloud Forests

    NASA Astrophysics Data System (ADS)

    Bergstrom, R.; Miura, T.; Lepczyk, C.; Giambelluca, T. W.; Nullet, M. A.; Nagai, S.

    2012-12-01

    Phenological studies are gaining importance globally as the onset of climate change impacts the timing of green-up and senescence in forest canopies and agricultural regions. Many studies use and analyze land surface phenology (LSP) derived from satellite vegetation index (VI) time series, such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), to monitor changes in phenological events. Seasonality is expected in deciduous temperate forests, while tropical regions are predicted to show more static reflectance readings given their stable, steady state. Due to persistent cloud cover and atmospheric interference in tropical regions, satellite VI time series are often subject to uncertainties and thus require near-surface vegetation monitoring systems for ground-truthing. This study has been designed to assess the precision of MODIS phenological signatures using above-canopy, down-looking digital cameras installed on flux towers on the Island of Hawai'i. The cameras are part of the expanding Phenological Eyes Network (PEN), which has been implementing a global network of above-canopy, hemispherical digital cameras for forest and agricultural phenological monitoring. Cameras have been installed at two locations in Hawai'i: one on a flux tower in close proximity to the Thurston Lava Tube (HVT) in Hawai'i Volcanoes National Park and the other on a weather station in a section of the Hawaiian Tropical Experimental Forest in Laupahoehoe (LEF). HVT consists primarily of a single canopy species, 'ohi'a lehua (Metrosideros polymorpha), with an understory of hapu'u ferns (Cibotium spp.), while LEF is similar but includes an additional dominant species, koa (Acacia koa), in the canopy structure. Given these species' characteristics, HVT is expected to show little seasonality, while LEF has the potential to deviate slightly during periods following dry and wet seasons.
MODIS VI time series data are being analyzed and will be compared to images from the cameras, which will have VIs extracted from their RGB image planes and normalized to be comparable with the MODIS VIs. Given Hawai'i's susceptibility to invasion and the delicacy of its endemic species, results from this study will provide necessary site-specific detail in determining the reliability of satellite-based inference in similar tropical phenology studies. Should satellite images provide adequate information, results from this study will allow for extrapolation across similar understudied tropical forests.
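One common way to extract a VI from a camera's RGB image planes, as described above, is the green chromatic coordinate; the abstract does not name the study's exact index, so the following per-pixel sketch is only one plausible formulation:

```python
def gcc(r: float, g: float, b: float) -> float:
    """Green chromatic coordinate: the green fraction of total pixel brightness."""
    total = r + g + b
    return g / total if total > 0 else 0.0

# A grey pixel (r = g = b) gives 1/3; greener canopy pixels push the value higher.
print(gcc(80, 120, 60))  # about 0.46 for a moderately green pixel
```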

  20. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
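The fronto-parallel case behind such distance calculations reduces to the pinhole similar-triangles relation; the sketch below uses hypothetical camera numbers and omits the tilted-plane correction the paper also develops:

```python
# Pinhole relation: real height / distance = image height on sensor / focal length.
def object_distance_m(real_height_m: float, focal_length_mm: float,
                      image_height_px: float, pixel_pitch_mm: float) -> float:
    """Distance to an object of known height, object plane parallel to image plane."""
    image_height_mm = image_height_px * pixel_pitch_mm
    return real_height_m * focal_length_mm / image_height_mm

# Hypothetical numbers: a 1.8 m pole imaged 600 px tall with a 50 mm lens and
# 0.005 mm pixels spans 3 mm on the sensor, giving a distance of 30 m.
print(object_distance_m(1.8, 50.0, 600, 0.005))  # 30.0
```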

  1. Camera! Action! Collaborate with Digital Moviemaking

    ERIC Educational Resources Information Center

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  2. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    NASA Astrophysics Data System (ADS)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve existing features to their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state-of-the-art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving the parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed autofocusing approach can be successfully used by camera manufacturers in developing the AF feature in future generations of digital still cameras and camera phones.
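The generic search loop underlying passive AF (a simplified hill-climbing illustration, not the dissertation's Filter-Switching algorithm) can be sketched with a synthetic sharpness curve standing in for a real band-pass focus measure:

```python
def sharpness(lens_pos: int, in_focus: int = 37) -> float:
    """Stand-in focus measure: peaks at the (hidden) in-focus lens position."""
    return 1.0 / (1.0 + (lens_pos - in_focus) ** 2)

def hill_climb_af(start: int, step: int, lo: int = 0, hi: int = 100) -> int:
    """Step the lens to maximize sharpness, halving the step size near the peak."""
    pos, best = start, sharpness(start)
    while True:
        moved = False
        for cand in (pos + step, pos - step):
            if lo <= cand <= hi and sharpness(cand) > best:
                pos, best, moved = cand, sharpness(cand), True
                break
        if not moved:
            if step == 1:
                return pos  # no better neighbor at the finest step: in focus
            step //= 2      # refine the search near the peak

print(hill_climb_af(0, 8))  # 37
```

Real AF must also budget actuator travel and overrun, which is exactly the parameter-tuning problem the dissertation automates.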

  3. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios, including military night vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high-contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible-region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high-fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying-sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.

  4. Evaluation of Ocean Color Scanner (OCS) photographic and digital data: Santa Barbara Channel test site, 29 October 1975 overflight

    NASA Technical Reports Server (NTRS)

    Kraus, S. P.; Estes, J. E.; Kronenberg, M. R.; Hajic, E. J.

    1977-01-01

    A summary of Ocean Color Scanner data was examined to evaluate the detection and discrimination capabilities of the system for marine resources, oil pollution, and man-made sea-surface targets of opportunity in the Santa Barbara Channel. Assessment of the utility of OCS data for determining sediment transport patterns along the coastal zone was a secondary goal. Data products from the 1975 overflight were provided in digital and analog formats. In evaluating the OCS data, automated and manual procedures were employed. A total of four channels of data in digital format were analyzed, as well as three channels of color-combined imagery and four channels of black-and-white imagery. In addition, 1:120,000-scale color infrared imagery acquired simultaneously with the OCS data was provided for comparative analysis purposes.

  5. High Scalability Video ISR Exploitation

    DTIC Science & Technology

    2012-10-01

    Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available for purchase for use in the field. However, even if such a UAV sensor with a DC-4K was flown

  6. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we propose two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, and 2) refining the interior orientation model by calibrating the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEMs (Digital Elevation Models) and DOMs (Digital Ortho Maps) are automatically generated.
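The back-projection residual used above to evaluate sensor-model refinement can be illustrated with a toy pinhole projection (all numbers hypothetical; the actual CE-1/CE-2 model is a push-broom model with full interior and exterior orientation):

```python
import math

def back_project(x: float, y: float, z: float, f_px: float,
                 cx: float, cy: float) -> tuple:
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    return (cx + f_px * x / z, cy + f_px * y / z)

def residual_px(observed: tuple, projected: tuple) -> float:
    """Distance in pixels between a measured point and its back-projection."""
    return math.hypot(observed[0] - projected[0], observed[1] - projected[1])

# Hypothetical point 100 units in front of a camera with 2000 px focal length
# and principal point (512, 512): it should project to (712, 412).
proj = back_project(10.0, -5.0, 100.0, 2000.0, 512.0, 512.0)
print(residual_px((712.5, 412.0), proj))  # 0.5 pixel residual
```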

  7. Organize Your Digital Photos: Display Your Images Without Hogging Hard-Disk Space

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    According to InfoTrends/CAP Ventures, by the end of this year more than 55 percent of all U.S. households will own at least one digital camera. With so many digital cameras in use, it is important for people to understand how to organize and store digital images in ways that make them easy to find. Additionally, today's affordable, large megapixel…

  8. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video is becoming more practical as a data gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  9. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset, and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of offset, dark current, read noise, linearity, photoresponse non-uniformity, and variance for individual pixels of standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras under highly uniform and controlled illumination, from dark conditions through multiple low light levels of 20 to 1,000 photons/pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
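The photon-transfer relation behind such per-pixel characterizations can be sketched as follows (an assumption-laden simplification, not the manufacturer's calibration procedure): for a shot-noise-limited pixel, variance in digital numbers grows linearly with mean signal, variance = gain × mean + read_noise², so the slope across light levels estimates the pixel's gain:

```python
def estimate_gain_dn_per_e(mean_lo: float, var_lo: float,
                           mean_hi: float, var_hi: float) -> float:
    """Two-point slope of the photon transfer curve (variance vs. mean) for one pixel."""
    return (var_hi - var_lo) / (mean_hi - mean_lo)

# Hypothetical pixel with gain 2 DN/e- and read-noise variance 4 DN^2,
# measured at illumination levels of 100 e- and 400 e-:
#   mean = gain * e-,  variance = gain * mean + 4
print(estimate_gain_dn_per_e(200.0, 404.0, 800.0, 1604.0))  # 2.0
```

With pixel-to-pixel variation in gain and offset, this fit must be done per pixel, which is precisely why whole-sensor noise statistics are insufficient for sCMOS.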

  10. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  11. Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges.

    PubMed

    Fernández-Guisuraga, José Manuel; Sanz-Ablanedo, Enoc; Suárez-Seoane, Susana; Calvo, Leonor

    2018-02-14

    This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera in a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with another image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony of the single frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas in comparison with the WorldView-2 satellite imagery. The drone orthomosaic could constitute a viable alternative for the evaluation of post-fire vegetation regeneration in large and heterogeneous burned areas.

  12. Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges

    PubMed Central

    2018-01-01

    This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera in a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with another image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony of the single frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas in comparison with the WorldView-2 satellite imagery. The drone orthomosaic could constitute a viable alternative for the evaluation of post-fire vegetation regeneration in large and heterogeneous burned areas. PMID:29443914

  13. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    ERIC Educational Resources Information Center

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
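
    The calibration against a reference light that the abstract describes can be as simple as a linear fit between the pixel columns of known emission lines and their wavelengths. A sketch under assumed (hypothetical) line positions for a mercury-bearing lamp:

```python
import numpy as np

# Hypothetical calibration data: pixel columns at which two known
# mercury emission lines appear in the camera image of the spectrum.
ref_pixels = np.array([312.0, 655.0])
ref_wavelengths = np.array([435.8, 546.1])  # nm

# A DVD grating is close to linear in pixel-vs-wavelength over the
# visible range at small diffraction angles, so fit a straight line.
slope, intercept = np.polyfit(ref_pixels, ref_wavelengths, 1)

def pixel_to_wavelength(col):
    """Map an image column index to a wavelength in nm."""
    return slope * col + intercept

print(round(pixel_to_wavelength(500.0), 1))  # 496.3
```

    Any feature located in the spectrum image can then be assigned a wavelength; more reference lines would let the fit's residuals estimate the achievable accuracy.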

  14. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  15. Quantifying plant colour and colour difference as perceived by humans using digital images.

    PubMed

    Kendal, Dave; Hauser, Cindy E; Garrard, Georgia E; Jellinek, Sacha; Giljohann, Katherine M; Moore, Joslin L

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management.
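
    One standard way to put a number on "colour difference" is to convert sRGB values to CIE L*a*b* and take the Euclidean distance (the CIE76 ΔE). The sketch below makes that concrete, though the paper's exact colour pipeline may differ:

```python
import math

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to CIE L*a*b* (D65 white point)."""
    def inv_gamma(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (inv_gamma(c) for c in rgb)
    # Linear RGB -> XYZ (sRGB primaries, D65 white)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(rgb1, rgb2):
    """CIE76 colour difference between two sRGB triples."""
    return math.dist(srgb_to_lab(rgb1), srgb_to_lab(rgb2))

print(round(delta_e((0, 0, 0), (255, 255, 255)), 1))  # 100.0 (black vs. white)
```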

  16. Quantifying Plant Colour and Colour Difference as Perceived by Humans Using Digital Images

    PubMed Central

    Kendal, Dave; Hauser, Cindy E.; Garrard, Georgia E.; Jellinek, Sacha; Giljohann, Katherine M.; Moore, Joslin L.

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management. PMID:23977275

  17. Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras

    DTIC Science & Technology

    1990-04-01

    poor resolution and a very limited working volume [Wan90]. 4 OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each...Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital’s Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital

  18. A Picture is Worth a Thousand Words

    ERIC Educational Resources Information Center

    Davison, Sarah

    2009-01-01

    Lions, tigers, and bears, oh my! Digital cameras, young inquisitive scientists, give it a try! In this project, students create an open-ended question for investigation, capture and record their observations--data--with digital cameras, and create a digital story to share their findings. The project follows a 5E learning cycle--Engage, Explore,…

  19. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.

  20. Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean

    PubMed Central

    Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave

    2009-01-01

    Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
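
    Treating the camera as a three-band radiometer amounts to averaging the RGB values over a patch of water surface and forming band ratios. A minimal sketch (the regression coefficients that map ratios to yellow substance or chlorophyll come from field calibration and are not shown):

```python
import numpy as np

def band_ratios(image):
    """Mean R, G, B over a water-surface image region and their ratios.
    `image` is an H x W x 3 uint8 array (e.g. loaded from a JPEG)."""
    r, g, b = image.reshape(-1, 3).mean(axis=0)
    return {"R/G": r / g, "G/B": g / b, "R/B": r / b}

# Synthetic greenish "water" patch for illustration.
patch = np.full((10, 10, 3), (40, 90, 120), dtype=np.uint8)
ratios = band_ratios(patch)
print(ratios["G/B"])  # 0.75
```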

  1. Assessing urban forest canopy cover using airborne or satellite imagery

    Treesearch

    Jeffrey T. Walton; David J. Nowak; Eric J. Greenfield

    2008-01-01

    With the availability of many sources of imagery and various digital classification techniques, assessing urban forest canopy cover is readily accessible to most urban forest managers. Understanding the capability and limitations of various types of imagery and classification methods is essential to interpreting canopy cover values. An overview of several remote...

  2. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, with their RGB pixel distribution, DSLR cameras sample information differently from the monochrome cameras usually employed in DH, and this has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochrome and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.
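
    The replication effect follows directly from Fourier theory: sampling only every second pixel (as a single Bayer colour channel effectively does) multiplies the hologram by a comb, which convolves its spectrum with a comb and creates shifted copies. A 1-D numerical sketch:

```python
import numpy as np

n = 256
x = np.arange(n)
# A single spatial frequency standing in for hologram fringes.
signal = np.exp(1j * 2 * np.pi * 10 * x / n)

# Bayer-like subsampling: keep every second sample, zero the rest.
mask = np.zeros(n)
mask[::2] = 1.0

spectrum = np.abs(np.fft.fft(signal * mask))
peaks = sorted(int(k) for k in np.argsort(spectrum)[-2:])
print(peaks)  # [10, 138]: the original frequency plus a replica shifted by n/2
```

    In a reconstructed hologram this shifted spectral copy appears as a displaced duplicate of the object, which is the artefact the paper analyses.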

  3. Historical Orthoimagery of the Lake Tahoe Basin

    USGS Publications Warehouse

    Soulard, Christopher E.; Raumann, Christian G.

    2008-01-01

    The U.S. Geological Survey (USGS) Western Geographic Science Center has developed a series of historical digital orthoimagery (HDO) datasets covering part or all of the Lake Tahoe Basin. Three datasets are available: (A) 1940 HDOs for the southern Lake Tahoe Basin, (B) 1969 HDOs for the entire Lake Tahoe Basin, and (C) 1987 HDOs for the southern Lake Tahoe Basin. The HDOs (for 1940, 1969, and 1987) were compiled photogrammetrically from aerial photography with varying scales, camera characteristics, image quality, and capture dates. The resulting datasets have a 1-meter horizontal resolution. Precision-corrected Ikonos multispectral satellite imagery was used as a substitute for HDOs/DOQs for the 2002 imagery date, but these data are not available for download in this series due to licensing restrictions. The projection of the HDO data is set to UTM Zone 10, NAD 1983. The data for each of the three available dates are clipped into files that spatially approximate the 3.75-minute USGS quarter quadrangles (roughly 3,000 to 4,000 hectares), and have roughly 100 pixels (or 100 meters) of overlap to facilitate combining the files into larger regions without data gaps. The files are named after 3.75-minute USGS quarter quadrangles that cover the same general spatial extent. These files are available in the ERDAS Imagine (.img) format.

  4. Thermal photogrammetric imaging: A new technique for monitoring dome eruptions

    NASA Astrophysics Data System (ADS)

    Thiele, Samuel T.; Varley, Nick; James, Mike R.

    2017-05-01

    Structure-from-motion (SfM) algorithms greatly facilitate the generation of 3-D topographic models from photographs and can form a valuable component of hazard monitoring at active volcanic domes. However, model generation from visible imagery can be prevented due to poor lighting conditions or surface obscuration by degassing. Here, we show that thermal images can be used in a SfM workflow to mitigate these issues and provide more continuous time-series data than visible-light equivalents. We demonstrate our methodology by producing georeferenced photogrammetric models from 30 near-monthly overflights of the lava dome that formed at Volcán de Colima (Mexico) between 2013 and 2015. Comparison of thermal models with equivalents generated from visible-light photographs from a consumer digital single lens reflex (DSLR) camera suggests that, despite being less detailed than their DSLR counterparts, the thermal models are more than adequate reconstructions of dome geometry, giving volume estimates within 10% of those derived using the DSLR. Significantly, we were able to construct thermal models in situations where degassing and poor lighting prevented the construction of models from DSLR imagery, providing substantially better data continuity than would have otherwise been possible. We conclude that thermal photogrammetry provides a useful new tool for monitoring effusive volcanic activity and assessing associated volcanic risks.
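
    The dome-volume comparison described above reduces to differencing two gridded surfaces: subtract a base surface from the DSM cell by cell, clip negative cells, and multiply by cell area. A minimal sketch:

```python
import numpy as np

def dome_volume(dsm, base, cell_size=1.0):
    """Volume between a DSM and a base surface: positive height
    differences summed and scaled by the grid cell area."""
    dh = np.clip(np.asarray(dsm, float) - np.asarray(base, float), 0.0, None)
    return dh.sum() * cell_size ** 2

# Tiny 2x2 example grid with a flat base at 0 m and 1 m cells.
vol = dome_volume([[2.0, 3.0], [1.0, 0.0]], np.zeros((2, 2)))
print(vol)  # 6.0
```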

  5. Application of a Near Infrared Imaging System for Thermographic Imaging of the Space Shuttle during Hypersonic Re-Entry

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Tietjen, Alan B.; Horvath, Thomas J.; Tomek, Deborah M.; Gibson, David M.; Taylor, Jeff C.; Tack, Steve; Bush, Brett C.; Mercer, C. David; Shea, Edward J.

    2010-01-01

    High resolution calibrated near infrared (NIR) imagery was obtained of the Space Shuttle's reentry during the STS-119, STS-125, and STS-128 missions. The infrared imagery was collected from a US Navy NP-3D Orion aircraft using a long-range infrared optical package referred to as Cast Glance. The slant ranges between the Space Shuttle and Cast Glance were approximately 26-41 nautical miles at the point of closest approach. The Hypersonic Thermodynamic Infrared Measurements (HYTHIRM) project was a NASA Langley led endeavor sponsored by the NASA Engineering and Safety Center, the Space Shuttle Program Office, and the NASA Aeronautics Research Mission Directorate to demonstrate a quantitative thermal imaging capability. HYTHIRM required several mission tools to acquire the imagery. These tools include pre-mission acquisition simulations of the Shuttle trajectory in relationship to the Cast Glance aircraft flight path, radiance modeling to predict the infrared response of the Shuttle, and post-mission analysis tools to process the infrared imagery into quantitative temperature maps. The spatially resolved global thermal measurements made during the Shuttle's hypersonic reentry provide valuable flight data for reducing the uncertainty associated with present-day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data are considered critical for the development of turbulence models supporting NASA's next-generation spacecraft. This paper will provide the motivation and details behind the use of an upgraded NIR imaging system onboard a Navy Cast Glance aircraft and describe the characterizations and procedures performed to obtain quantitative temperature maps. A brief description and assessment will be provided of the previously used analog NIR camera, along with image examples from Shuttle missions STS-121 and STS-115 and a solar tower test. These thermal observations confirmed the challenges of long-range acquisition during re-entry, which stem from unknown atmospheric conditions, image saturation, vibration, etc., and provide the motivation for the use of a digital NIR sensor. The characterizations performed on the digital NIR sensor included radiometric, spatial, and spectral measurements using blackbody radiation sources and known targets. An assessment of the collected data for three Space Shuttle atmospheric re-entries, STS-119, STS-125, and STS-128, is provided along with a description of various events of interest captured using the digital NIR imaging system, such as RCS firings and boundary layer transitions. Lastly, the process used to convert the raw image counts to quantitative temperatures is presented along with comparisons to the Space Shuttle's onboard thermocouples.

  6. Digital processing of Mariner 9 television data.

    NASA Technical Reports Server (NTRS)

    Green, W. B.; Seidman, J. B.

    1973-01-01

    The digital image processing performed by the Image Processing Laboratory (IPL) at JPL in support of the Mariner 9 mission is summarized. The support is divided into the general categories of image decalibration (the removal of photometric and geometric distortions from returned imagery), computer cartographic projections in support of mapping activities, and adaptive experimenter support (flexible support to provide qualitative digital enhancements and quantitative data reduction of returned imagery). Among the tasks performed were the production of maximum discriminability versions of several hundred frames to support generation of a geodetic control net for Mars, and special enhancements supporting analysis of Phobos and Deimos images.

  7. KA-102 Film/EO Standoff System

    NASA Astrophysics Data System (ADS)

    Turpin, Richard T.

    1984-12-01

    The KA-102 is an in-flight selectable film or electro-optic (EO) visible reconnaissance camera with a real-time data link. The lens is a 66-in., f/4 refractor with a 4° field-of-view. The focal plane is a continuous line array of 10,240 CCD elements that operates in the pushbroom mode. In the film mode, the camera uses standard 5-in.-wide 3414 or 3412 film. The EO imagery is transmitted up to 500 n.mi. to the ground station over a 75-Mbit/sec X-band data link via a relay aircraft (see Figure 1). The camera may be controlled from the ground station via an uplink or from the cockpit control panel. The 8-ft-diameter ground tracking antenna is located on high ground and linked to the ground station via a 1-mile-long, two-way fiber optic system. In the ground station the imagery is calibrated and displayed in real time on three CRTs. Selected imagery may be stored on disk and enhanced, analyzed, and annotated in near-real time. The imagery may be enhanced and magnified in real time. Hardcopy frames may be made on 8 x 10-in. Polaroid, 35-mm film, or dry silver paper. All the received image and engineering data are recorded on a high-density tape recorder. The aircraft track is recorded on a map plotter. Ground support equipment (GSE), manuals, spares, and training are included in the system. Falcon 20 aircraft were modified on a subcontract to Dynalectron, Ft. Worth.

  8. Practical use of video imagery in nearshore oceanographic field studies

    USGS Publications Warehouse

    Holland, K.T.; Holman, R.A.; Lippmann, T.C.; Stanley, J.; Plant, N.

    1997-01-01

    An approach was developed for using video imagery to quantify, in terms of both spatial and temporal dimensions, a number of naturally occurring (nearshore) physical processes. The complete method is presented, including the derivation of the geometrical relationships relating image and ground coordinates, principles to be considered when working with video imagery and the two-step strategy for calibration of the camera model. The techniques are founded on the principles of photogrammetry, account for difficulties inherent in the use of video signals, and have been adapted to allow for flexibility of use in field studies. Examples from field experiments indicate that this approach is both accurate and applicable under the conditions typically experienced when sampling in coastal regions. Several applications of the camera model are discussed, including the measurement of nearshore fluid processes, sand bar length scales, foreshore topography, and drifter motions. Although we have applied this method to the measurement of nearshore processes and morphologic features, these same techniques are transferable to studies in other geophysical settings.
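
    The geometrical relationship between image and ground coordinates that the method rests on is, for points on a (near-)planar surface, a projective homography. A sketch of fitting one from control points with the direct linear transform (the coordinates below are hypothetical):

```python
import numpy as np

def fit_homography(img_pts, ground_pts):
    """Direct linear transform: 3x3 homography mapping image (u, v)
    to ground (x, y) from >= 4 point correspondences."""
    rows = []
    for (u, v), (x, y) in zip(img_pts, ground_pts):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of the stacked equations.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def to_ground(H, u, v):
    x, y, w = H @ (u, v, 1.0)
    return x / w, y / w

# Hypothetical control points: image pixels vs. surveyed beach coordinates (m).
img = [(100, 200), (900, 210), (880, 700), (120, 690)]
gnd = [(0.0, 50.0), (80.0, 50.0), (75.0, 0.0), (5.0, 0.0)]
H = fit_homography(img, gnd)
print(to_ground(H, 100, 200))  # recovers ~(0.0, 50.0)
```

    Real nearshore video work adds lens-distortion correction and a full camera model, but the projective core is as above.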

  9. BLM Unmanned Aircraft Systems (UAS) Resource Management Operations

    NASA Astrophysics Data System (ADS)

    Hatfield, M. C.; Breen, A. L.; Thurau, R.

    2016-12-01

    The Department of the Interior Bureau of Land Management is funding research at the University of Alaska Fairbanks to study Unmanned Aircraft Systems (UAS) Resource Management Operations. In August 2015, the team conducted flight research at UAF's Toolik Field Station (TFS). The purpose was to determine the most efficient use of small UAS to collect low-altitude airborne digital stereo images, process the stereo imagery into close-range photogrammetry products, and integrate derived imagery products into the BLM's National Assessment, Inventory and Monitoring (AIM) Strategy. The AIM Strategy assists managers in answering questions of land resources at all organizational levels and develop management policy at regional and national levels. In Alaska, the BLM began to implement its AIM strategy in the National Petroleum Reserve-Alaska (NPR-A) in 2012. The primary goals of AIM-monitoring at the NPR-A are to implement an ecological baseline to monitor ecological trends, and to develop a monitoring network to understand the efficacy of management decisions. The long-term AIM strategy also complements other ongoing NPR-A monitoring processes, collects multi-use and multi-temporal data, and supports understanding of ecosystem management strategies in order to implement defensible natural resource management policy. The campaign measured vegetation types found in the NPR-A, using UAF's TFS location as a convenient proxy. The vehicle selected was the ACUASI Ptarmigan, a small hexacopter (based on DJI S800 airframe and 3DR autopilot) capable of carrying a 1.5 kg payload for 15 min for close-range environmental monitoring missions. The payload was a stereo camera system consisting of Sony NEX7's with various lens configurations (16/20/24/35 mm). A total of 77 flights were conducted over a 4 ½ day period, with 1.5 TB of data collected. Mission variables included camera height, UAS speed, transect overlaps, and camera lenses/settings. 
Invaluable knowledge was gained as to the limitations of, and opportunities for, field deployment of UAS relative to local conditions and vegetation type. Future efforts will focus on refining data analysis techniques and further optimizing UAS/sensor combinations and flight profiles.

  10. Satellite Imagery Via Personal Computer

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Automatic Picture Transmission (APT) was incorporated by NASA in the Tiros 8 weather satellite. APT included an advanced satellite camera that immediately transmitted a picture as well as low cost receiving equipment. When an advanced scanning radiometer was later introduced, ground station display equipment would not readily adjust to the new format until GSFC developed an APT Digital Scan Converter that made them compatible. A NASA Technical Note by Goddard's Vermillion and Kamoski described how to build a converter. In 1979, Electro-Services, using this technology, built the first microcomputer weather imaging system in the U.S. The company changed its name to Satellite Data Systems, Inc. and now manufactures the WeatherFax facsimile display graphics system which converts a personal computer into a weather satellite image acquisition and display workstation. Hardware, antennas, receivers, etc. are also offered. Customers include U.S. Weather Service, schools, military, etc.

  11. NASA Tech Briefs, March 2014

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Topics include: Data Fusion for Global Estimation of Forest Characteristics From Sparse Lidar Data; Debris and Ice Mapping Analysis Tool - Database; Data Acquisition and Processing Software - DAPS; Metal-Assisted Fabrication of Biodegradable Porous Silicon Nanostructures; Post-Growth, In Situ Adhesion of Carbon Nanotubes to a Substrate for Robust CNT Cathodes; Integrated PEMFC Flow Field Design for Gravity-Independent Passive Water Removal; Thermal Mechanical Preparation of Glass Spheres; Mechanistic-Based Multiaxial-Stochastic-Strength Model for Transversely-Isotropic Brittle Materials; Methods for Mitigating Space Radiation Effects, Fault Detection and Correction, and Processing Sensor Data; Compact Ka-Band Antenna Feed with Double Circularly Polarized Capability; Dual-Leadframe Transient Liquid Phase Bonded Power Semiconductor Module Assembly and Bonding Process; Quad First Stage Processor: A Four-Channel Digitizer and Digital Beam-Forming Processor; Protective Sleeve for a Pyrotechnic Reefing Line Cutter; Metabolic Heat Regenerated Temperature Swing Adsorption; CubeSat Deployable Log Periodic Dipole Array; Re-entry Vehicle Shape for Enhanced Performance; NanoRacks-Scale MEMS Gas Chromatograph System; Variable Camber Aerodynamic Control Surfaces and Active Wing Shaping Control; Spacecraft Line-of-Sight Stabilization Using LWIR Earth Signature; Technique for Finding Retro-Reflectors in Flash LIDAR Imagery; Novel Hemispherical Dynamic Camera for EVAs; 360 deg Visual Detection and Object Tracking on an Autonomous Surface Vehicle; Simulation of Charge Carrier Mobility in Conducting Polymers; Observational Data Formatter Using CMOR for CMIP5; Propellant Loading Physics Model for Fault Detection Isolation and Recovery; Probabilistic Guidance for Swarms of Autonomous Agents; Reducing Drift in Stereo Visual Odometry; Future Air-Traffic Management Concepts Evaluation Tool; Examination and A Priori Analysis of a Direct Numerical Simulation Database for High-Pressure 
Turbulent Flows; and Resource-Constrained Application of Support Vector Machines to Imagery.

  12. Remote sensing with simulated unmanned aircraft imagery for precision agriculture applications

    USGS Publications Warehouse

    Hunt, E. Raymond; Daughtry, Craig S.T.; Mirsky, Steven B.; Hively, W. Dean

    2014-01-01

    An important application of unmanned aircraft systems (UAS) may be remote-sensing for precision agriculture, because of its ability to acquire images with very small pixel sizes from low altitude flights. The objective of this study was to compare information obtained from two different pixel sizes, one about a meter (the size of a small vegetation plot) and one about a millimeter. Cereal rye (Secale cereale) was planted at the Beltsville Agricultural Research Center for a winter cover crop with fall and spring fertilizer applications, which produced differences in biomass and leaf chlorophyll content. UAS imagery was simulated by placing a Fuji IS-Pro UVIR digital camera at 3-m height looking nadir. An external UV-IR cut filter was used to acquire true-color images; an external red cut filter was used to obtain color-infrared-like images with bands at near-infrared, green, and blue wavelengths. Plot-scale Green Normalized Difference Vegetation Index was correlated with dry aboveground biomass (r = 0.58), whereas the Triangular Greenness Index (TGI) was not correlated with chlorophyll content. We used the SamplePoint program to select 100 pixels systematically; we visually identified the cover type and acquired the digital numbers. The number of rye pixels in each image was better correlated with biomass (r = 0.73), and the average TGI from only leaf pixels was negatively correlated with chlorophyll content (r = -0.72). Thus, better information for crop requirements may be obtained using very small pixel sizes, but new algorithms based on computer vision are needed for analysis. It may not be necessary to geospatially register large numbers of photographs with very small pixel sizes. Instead, images could be analyzed as single plots along field transects.
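
    The two indices used in the study are simple band combinations: GNDVI replaces red with green in the usual NDVI ratio, and TGI approximates the area of a triangle spanned by reflectances at the red (670 nm), green (550 nm), and blue (480 nm) band centres. A sketch with illustrative reflectance values:

```python
def gndvi(nir, green):
    """Green Normalized Difference Vegetation Index."""
    return (nir - green) / (nir + green)

def tgi(red, green, blue):
    """Triangular Greenness Index, using band centres of 670 nm (red),
    550 nm (green), and 480 nm (blue)."""
    return -0.5 * ((670 - 480) * (red - green) - (670 - 550) * (red - blue))

# Illustrative reflectances for a green canopy pixel.
print(round(gndvi(0.45, 0.09), 3))  # 0.667
print(round(tgi(0.05, 0.09, 0.04), 2))  # 4.4
```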

  13. Google Haul Out: Earth Observation Imagery and Digital Aerial Surveys in Coastal Wildlife Management and Abundance Estimation

    PubMed Central

    Moxley, Jerry H.; Bogomolni, Andrea; Hammill, Mike O.; Moore, Kathleen M. T.; Polito, Michael J.; Sette, Lisa; Sharp, W. Brian; Waring, Gordon T.; Gilbert, James R.; Halpin, Patrick N.; Johnston, David W.

    2017-01-01

    As the sampling frequency and resolution of Earth observation imagery increase, there are growing opportunities for novel applications in population monitoring. New methods are required to apply established analytical approaches to data collected from new observation platforms (e.g., satellites and unmanned aerial vehicles). Here, we present a method that estimates regional seasonal abundances for an understudied and growing population of gray seals (Halichoerus grypus) in southeastern Massachusetts, using opportunistic observations in Google Earth imagery. Abundance estimates are derived from digital aerial survey counts by adapting established correction-based analyses with telemetry behavioral observation to quantify survey biases. The result is a first regional understanding of gray seal abundance in the northeast US through opportunistic Earth observation imagery and repurposed animal telemetry data. As species observation data from Earth observation imagery become more ubiquitous, such methods provide a robust, adaptable, and cost-effective solution to monitoring animal colonies and understanding species abundances. PMID:29599542

  14. Google Haul Out: Earth Observation Imagery and Digital Aerial Surveys in Coastal Wildlife Management and Abundance Estimation.

    PubMed

    Moxley, Jerry H; Bogomolni, Andrea; Hammill, Mike O; Moore, Kathleen M T; Polito, Michael J; Sette, Lisa; Sharp, W Brian; Waring, Gordon T; Gilbert, James R; Halpin, Patrick N; Johnston, David W

    2017-08-01

    As the sampling frequency and resolution of Earth observation imagery increase, there are growing opportunities for novel applications in population monitoring. New methods are required to apply established analytical approaches to data collected from new observation platforms (e.g., satellites and unmanned aerial vehicles). Here, we present a method that estimates regional seasonal abundances for an understudied and growing population of gray seals (Halichoerus grypus) in southeastern Massachusetts, using opportunistic observations in Google Earth imagery. Abundance estimates are derived from digital aerial survey counts by adapting established correction-based analyses with telemetry behavioral observation to quantify survey biases. The result is a first regional understanding of gray seal abundance in the northeast US through opportunistic Earth observation imagery and repurposed animal telemetry data. As species observation data from Earth observation imagery become more ubiquitous, such methods provide a robust, adaptable, and cost-effective solution to monitoring animal colonies and understanding species abundances.
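
    The correction-based estimate the authors adapt has a simple core: divide the number of animals counted in the imagery by the telemetry-derived probability that an animal was hauled out and visible at survey time. A sketch with hypothetical numbers:

```python
def corrected_abundance(count, p_available):
    """Correction-based abundance: imagery count scaled by the
    probability of being hauled out (available to be counted)."""
    if not 0.0 < p_available <= 1.0:
        raise ValueError("availability must be in (0, 1]")
    return count / p_available

# Hypothetical survey: 3,000 seals counted, 60% estimated availability.
print(corrected_abundance(3000, 0.6))  # 5000.0
```

    In practice the availability probability carries its own uncertainty, which propagates into confidence intervals on the abundance estimate.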

  15. Cartographic services contract...for everything geographic

    USGS Publications Warehouse

    ,

    2003-01-01

    The U.S. Geological Survey's (USGS) Cartographic Services Contract (CSC) is used to award work for photogrammetric and mapping services under the umbrella of Architect-Engineer (A&E) contracting. The A&E contract is broad in scope and can accommodate any activity related to standard, nonstandard, graphic, and digital cartographic products. Services provided may include, but are not limited to, photogrammetric mapping and aerotriangulation; orthophotography; thematic mapping (for example, land characterization); analog and digital imagery applications; geographic information systems development; surveying and control acquisition, including ground-based and airborne Global Positioning System; analog and digital image manipulation, analysis, and interpretation; raster and vector map digitizing; data manipulations (for example, transformations, conversions, generalization, integration, and conflation); primary and ancillary data acquisition (for example, aerial photography, satellite imagery, multispectral, multitemporal, and hyperspectral data); image scanning and processing; metadata production, revision, and creation; and production or revision of standard USGS products defined by formal and informal specification and standards, such as those for digital line graphs, digital elevation models, digital orthophoto quadrangles, and digital raster graphics.

  16. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established under the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with little user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both for DSM reconstruction and for image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Geospatial Information from Satellite Imagery for Geovisualisation of Smart Cities in India

    NASA Astrophysics Data System (ADS)

    Mohan, M.

    2016-06-01

In the recent past, there has been a large emphasis on the extraction of geospatial information from satellite imagery. This information is processed with geospatial technologies that play an important role in the development of smart cities, particularly in developing countries such as India. The study is based on the latest geospatial satellite imagery available, which is multi-date, multi-stage, multi-sensor, and multi-resolution. In addition, the latest geospatial technologies have been used for digital image processing of remote sensing satellite imagery, together with the latest geographic information systems, for 3-D geovisualisation, geospatial digital mapping, and geospatial analysis in the development of smart cities in India. Geospatial information obtained from remote sensing and GPS systems has a complex structure involving space, time, and presentation. Such information supports 3-dimensional digital modelling for smart cities, which involves the integration of spatial and non-spatial information for geographic visualisation of smart cities in the context of the real world. In other words, the geospatial database provides a platform for information visualisation, also known as geovisualisation. As a result, increasing research interest is being directed to geospatial analysis, digital mapping, geovisualisation, and the monitoring and development of smart cities using geospatial technologies. The present research attempts to support the development of cities in a real-world scenario, particularly to help local, regional and state-level planners and policy makers better understand and address issues attributed to cities, using geospatial information from satellite imagery for geovisualisation of smart cities in an emerging and developing country, India.

  18. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X-Ray Burst Spectrometer, with X-ray instruments that will be available on the Gamma Ray Observatory, and eventually with the Gamma Ray Imaging Device (GRID) and the High Resolution Gamma-Ray and Hard X-Ray Spectrometer (HIREGS), which are being developed for the Max '91 program. The digital camera recently proved successful as a one-camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway, as are analyses of the campaign data.

  19. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have? We need as many megapixels as possible, since the other guys are killing us with their 'umpteen' megapixel pocket-sized digital cameras." And so it goes, until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels are just not very good. The truth of the matter is that the most important feature of digital cameras in the last five years has been automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.

  20. Comparative Accuracy Evaluation of Fine-Scale Global and Local Digital Surface Models: The Tshwane Case Study I

    NASA Astrophysics Data System (ADS)

    Breytenbach, A.

    2016-10-01

Conducted in the City of Tshwane, South Africa, this study set out to test the accuracy of DSMs derived locally from different remotely sensed data. VHR digital mapping camera stereo-pairs, tri-stereo imagery collected by a Pléiades satellite, and data from the TanDEM-X InSAR satellite configuration were fundamental in the construction of seamless DSM products at different postings, namely 2 m, 4 m and 12 m. The three DSMs were sampled against independent control points originating from validated airborne LiDAR data. The reference surfaces were derived from the same dense point cloud at grid resolutions corresponding to those of the samples. The absolute and relative positional accuracies were computed using well-known DEM error metrics and accuracy statistics. Overall vertical accuracies were also assessed and compared across seven slope classes and nine primary land cover classes. Although all three DSMs displayed significantly more vertical error where solid waterbodies, dense natural and/or alien woody vegetation and, to a lesser degree, urban residential areas with significant canopy cover were encountered, all three surpassed their expected positional accuracies overall.

  1. Experimental flights using a small unmanned aircraft system for mapping emergent sandbars

    USGS Publications Warehouse

    Kinzel, Paul J.; Bauer, Mark A.; Feller, Mark R.; Holmquist-Johnson, Christopher; Preston, Todd

    2015-01-01

The US Geological Survey and Parallel Inc. conducted experimental flights with the Tarantula Hawk (T-Hawk) unmanned aircraft system (UAS) at the Dyer and Cottonwood Ranch properties located along reaches of the Platte River near Overton, Nebraska, in July 2013. We equipped the T-Hawk UAS platform with a consumer-grade digital camera to collect imagery of emergent sandbars in the reaches and used photogrammetric software and surveyed control points to generate orthophotographs and digital elevation models (DEMs) of the reaches. To optimize the image alignment process, we retained or eliminated tie points based on their relative errors and spatial resolution, thereby minimizing the total error in the project. Additionally, we surveyed seven transects that traversed emergent sandbars, collecting global positioning system location data concurrently, to evaluate the accuracy of the UAS survey methodology. The root mean square errors for the elevation of emergent points along each transect across the DEMs ranged from 0.04 to 0.12 m. If adequate survey control is established, a UAS combined with photogrammetry software shows promise for accurate monitoring of emergent sandbar morphology and river management activities in short (1–2 km) river reaches.
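    The vertical-accuracy figure quoted above is a root mean square error between DEM elevations and GPS-surveyed transect points. A minimal sketch of that metric, using hypothetical elevations rather than the study's data:

    ```python
    import numpy as np

    def vertical_rmse(dem_elev, gps_elev):
        """Root mean square error between DEM-derived and GPS-surveyed elevations (m)."""
        dem_elev = np.asarray(dem_elev, dtype=float)
        gps_elev = np.asarray(gps_elev, dtype=float)
        return float(np.sqrt(np.mean((dem_elev - gps_elev) ** 2)))

    # Hypothetical transect: DEM heights sampled at the GPS check-point locations
    dem = [312.04, 311.87, 312.40, 311.95]
    gps = [312.00, 311.90, 312.33, 312.02]
    print(round(vertical_rmse(dem, gps), 3))  # 0.055
    ```

    A single RMSE per transect, computed this way over well-distributed check points, is the kind of summary behind the 0.04–0.12 m range the abstract reports.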

  2. Remote sensing of deep hermatypic coral reefs in Puerto Rico and the U.S. Virgin Islands using the Seabed autonomous underwater vehicle

    NASA Astrophysics Data System (ADS)

    Armstrong, Roy A.; Singh, Hanumant

    2006-09-01

    Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the required information for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image based navigation, and multi-sensor fusion of acoustic and optical data.

  3. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

For many years, the widening use of digital imaging products, e.g., digital cameras, has drawn much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to that of conventional cameras (with photographic film). For example, diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy against various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
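    At its core, the sinusoidal-target approach compares the modulation (Michelson contrast) of the pattern before and after it passes through the camera: MTF at a spatial frequency is the ratio of output to input modulation. A simplified sketch with synthetic traces (illustrative values, not the paper's data or algorithm):

    ```python
    import numpy as np

    def modulation(signal):
        """Michelson contrast (Imax - Imin) / (Imax + Imin) of a 1-D intensity trace."""
        s = np.asarray(signal, dtype=float)
        return (s.max() - s.min()) / (s.max() + s.min())

    def mtf_from_sine_targets(input_traces, output_traces):
        """MTF at each tested spatial frequency: output modulation / input modulation."""
        return [modulation(o) / modulation(i) for i, o in zip(input_traces, output_traces)]

    # Hypothetical example: a full-contrast sine input and a blurred, lower-contrast response
    x = np.linspace(0, 2 * np.pi, 256)
    input_trace = 0.5 + 0.5 * np.sin(x)   # modulation ~ 1.0
    output_trace = 0.5 + 0.3 * np.sin(x)  # modulation ~ 0.6
    print(round(mtf_from_sine_targets([input_trace], [output_trace])[0], 2))  # 0.6
    ```

    Sweeping the SLM through a set of spatial frequencies and repeating this ratio yields the MTF curve.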

  4. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient; it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for full-scale, 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  5. Evaluation of Digital Camera Technology For Bridge Inspection

    DOT National Transportation Integrated Search

    1997-07-18

    As part of a cooperative agreement between the Tennessee Department of Transportation and the Federal Highway Administration, a study was conducted to evaluate current levels of digital camera and color printing technology with regard to their applic...

  6. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence versus 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery, the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated, fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera, and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LIDAR system along with typical results over urban and rural areas collected from both rotary- and fixed-wing aircraft. We conclude with a discussion of future work.
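    The interpolation step described above, in which coarse LiDAR pixels are brought up to the resolution of the boresighted context camera, can be illustrated with a toy bilinear upsampler. The array values and factor below are hypothetical, and a real system would also account for camera calibration and parallax:

    ```python
    import numpy as np

    def upsample_depth(depth, factor):
        """Bilinearly interpolate a coarse depth map onto a grid `factor` times finer,
        a toy stand-in for matching a boresighted context camera's resolution."""
        h, w = depth.shape
        # Fine-grid coordinates expressed in coarse-pixel units
        ys = np.linspace(0, h - 1, h * factor)
        xs = np.linspace(0, w - 1, w * factor)
        # Separable bilinear interpolation: rows first, then columns
        rows = np.array([np.interp(xs, np.arange(w), row) for row in depth])
        cols = np.array([np.interp(ys, np.arange(h), rows[:, j])
                         for j in range(rows.shape[1])]).T
        return cols

    coarse = np.array([[10.0, 20.0],
                       [30.0, 40.0]])
    fine = upsample_depth(coarse, 2)
    print(fine.shape)  # (4, 4)
    ```

    Corner values of the fine grid coincide with the coarse samples, while interior pixels are distance-weighted blends of their neighbours.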

  7. An automated SO2 camera system for continuous, real-time monitoring of gas emissions from Kīlauea Volcano's summit Overlook Crater

    USGS Publications Warehouse

    Kern, Christoph; Sutton, Jeff; Elias, Tamar; Lee, Robert Lopaka; Kamibayashi, Kevan P.; Antolik, Loren; Werner, Cynthia A.

    2015-01-01

    SO2 camera systems allow rapid two-dimensional imaging of sulfur dioxide (SO2) emitted from volcanic vents. Here, we describe the development of an SO2 camera system specifically designed for semi-permanent field installation and continuous use. The integration of innovative but largely “off-the-shelf” components allowed us to assemble a robust and highly customizable instrument capable of continuous, long-term deployment at Kīlauea Volcano's summit Overlook Crater. Recorded imagery is telemetered to the USGS Hawaiian Volcano Observatory (HVO) where a novel automatic retrieval algorithm derives SO2 column densities and emission rates in real-time. Imagery and corresponding emission rates displayed in the HVO operations center and on the internal observatory website provide HVO staff with useful information for assessing the volcano's current activity. The ever-growing archive of continuous imagery and high-resolution emission rates in combination with continuous data from other monitoring techniques provides insight into shallow volcanic processes occurring at the Overlook Crater. An exemplary dataset from September 2013 is discussed in which a variation in the efficiency of shallow circulation and convection, the processes that transport volatile-rich magma to the surface of the summit lava lake, appears to have caused two distinctly different phases of lake activity and degassing. This first successful deployment of an SO2 camera for continuous, real-time volcano monitoring shows how this versatile technique might soon be adapted and applied to monitor SO2 degassing at other volcanoes around the world.
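    The emission-rate retrieval mentioned above conventionally integrates SO2 column densities along a transect across the plume and multiplies by the plume transport speed. A minimal sketch of that arithmetic with hypothetical numbers (not the HVO retrieval algorithm):

    ```python
    import numpy as np

    def emission_rate(column_densities, pixel_width_m, plume_speed_ms):
        """Integrate SO2 column densities (kg/m^2) along a transect perpendicular
        to transport and multiply by plume speed (m/s), giving kg/s."""
        cd = np.asarray(column_densities, dtype=float)
        return cd.sum() * pixel_width_m * plume_speed_ms

    # Hypothetical transect of per-pixel column densities across the plume
    cd = [0.0, 0.002, 0.005, 0.004, 0.001, 0.0]  # kg SO2 per m^2
    print(round(emission_rate(cd, pixel_width_m=2.0, plume_speed_ms=5.0), 2))  # 0.12
    ```

    Repeating this for every image frame is what turns continuous imagery into the real-time emission-rate time series described in the abstract.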

  8. An automated SO2 camera system for continuous, real-time monitoring of gas emissions from Kīlauea Volcano's summit Overlook Crater

    NASA Astrophysics Data System (ADS)

    Kern, Christoph; Sutton, Jeff; Elias, Tamar; Lee, Lopaka; Kamibayashi, Kevan; Antolik, Loren; Werner, Cynthia

    2015-07-01

SO2 camera systems allow rapid two-dimensional imaging of sulfur dioxide (SO2) emitted from volcanic vents. Here, we describe the development of an SO2 camera system specifically designed for semi-permanent field installation and continuous use. The integration of innovative but largely “off-the-shelf” components allowed us to assemble a robust and highly customizable instrument capable of continuous, long-term deployment at Kīlauea Volcano's summit Overlook Crater. Recorded imagery is telemetered to the USGS Hawaiian Volcano Observatory (HVO) where a novel automatic retrieval algorithm derives SO2 column densities and emission rates in real-time. Imagery and corresponding emission rates displayed in the HVO operations center and on the internal observatory website provide HVO staff with useful information for assessing the volcano's current activity. The ever-growing archive of continuous imagery and high-resolution emission rates in combination with continuous data from other monitoring techniques provides insight into shallow volcanic processes occurring at the Overlook Crater. An exemplary dataset from September 2013 is discussed in which a variation in the efficiency of shallow circulation and convection, the processes that transport volatile-rich magma to the surface of the summit lava lake, appears to have caused two distinctly different phases of lake activity and degassing. This first successful deployment of an SO2 camera for continuous, real-time volcano monitoring shows how this versatile technique might soon be adapted and applied to monitor SO2 degassing at other volcanoes around the world.

  9. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count, with even moderate-cost (~120) DSCs having 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.

  10. Digital dental photography. Part 4: choosing a camera.

    PubMed

    Ahmad, I

    2009-06-13

With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single lens reflex (DSLR) camera is an ideal choice for dental use, enabling the capture of portraits and of close-up or macro images of the dentition and study casts. However, for the sake of completeness, some other camera systems used in dentistry are also discussed.

  11. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  12. Simulation of parafoil reconnaissance imagery

    NASA Astrophysics Data System (ADS)

    Kogler, Kent J.; Sutkus, Linas; Troast, Douglas; Kisatsky, Paul; Charles, Alain M.

    1995-08-01

Reconnaissance from unmanned platforms is currently of interest to DoD and civil sectors concerned with drug trafficking and illegal immigration. Platforms employed vary from motorized aircraft to tethered balloons. One approach currently under evaluation deploys a TV camera suspended from a parafoil delivered to the area of interest by a cannon-launched projectile. Imagery is then transmitted to a remote monitor for processing and interpretation. This paper presents results of imagery obtained from simulated parafoil flights, in which software techniques were developed to introduce the image degradation caused by atmospheric obscurants and by perturbations of the normal parafoil flight trajectory induced by wind gusts. The approach to capturing continuous motion imagery from captive flight test recordings, the introduction of simulated effects, and the transfer of the processed imagery back to video tape are described.

  13. Operational Use of Remote Sensing within USDA

    NASA Technical Reports Server (NTRS)

    Bethel, Glenn R.

    2007-01-01

    A viewgraph presentation of remote sensing imagery within the USDA is shown. USDA Aerial Photography, Digital Sensors, Hurricane imagery, Remote Sensing Sources, Satellites used by Foreign Agricultural Service, Landsat Acquisitions, and Aerial Acquisitions are also shown.

  14. Thermal Imaging with Novel Infrared Focal Plane Arrays and Quantitative Analysis of Thermal Imagery

    NASA Technical Reports Server (NTRS)

    Gunapala, S. D.; Rafol, S. B.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Soibel, A.; Ting, D. Z.; Tidrow, Meimei

    2012-01-01

We have developed a single long-wavelength infrared (LWIR) quantum well infrared photodetector (QWIP) camera for thermography. This camera has been used to measure the temperature profiles of patients. A pixel-coregistered, simultaneously-read mid-wavelength infrared (MWIR)/LWIR dual-band QWIP camera was developed to improve the accuracy of temperature measurements, especially for objects with unknown emissivity. Even the dual-band measurement can give inaccurate results, because emissivity is a function of wavelength. Thus, we have been developing a four-band QWIP camera for accurate temperature measurement of remote objects.

  15. Digital Earth Watch: Investigating the World with Digital Cameras

    NASA Astrophysics Data System (ADS)

    Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.

    2015-12-01

Every digital camera, including the smartphone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website, which lets students explore light, color and pixels, manipulate color in images, and make measurements, will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu
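    A simple example of the kind of color measurement such imagery supports is the green chromatic coordinate, G / (R + G + B), widely used to track vegetation greenness in repeat photographs. A minimal sketch with hypothetical pixel values:

    ```python
    import numpy as np

    def green_chromatic_coordinate(rgb):
        """Per-pixel green fraction G / (R + G + B), a simple canopy 'greenness' index.
        Pixels with zero total intensity map to 0 to avoid division by zero."""
        rgb = np.asarray(rgb, dtype=float)
        total = rgb.sum(axis=-1)
        safe_total = np.where(total > 0, total, 1.0)
        return np.where(total > 0, rgb[..., 1] / safe_total, 0.0)

    # Hypothetical 1x2 image: a green leaf pixel and a grey soil pixel
    img = np.array([[[40, 120, 40], [100, 100, 100]]])
    gcc = green_chromatic_coordinate(img)
    print(np.round(gcc, 3))
    ```

    Tracking the mean of this index over a fixed region of interest across repeat photographs is one common way such camera time series are turned into phenology signals.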

  16. Derivation of high spatial resolution albedo from UAV digital imagery: application over the Greenland Ice Sheet

    NASA Astrophysics Data System (ADS)

    Ryan, Jonathan C.; Hubbard, Alun; Box, Jason E.; Brough, Stephen; Cameron, Karen; Cook, Joseph M.; Cooper, Matthew; Doyle, Samuel H.; Edwards, Arwyn; Holt, Tom; Irvine-Fynn, Tristram; Jones, Christine; Pitcher, Lincoln H.; Rennermalm, Asa K.; Smith, Laurence C.; Stibal, Marek; Snooke, Neal

    2017-05-01

    Measurements of albedo are a prerequisite for modelling surface melt across the Earth's cryosphere, yet available satellite products are limited in spatial and/or temporal resolution. Here, we present a practical methodology to obtain centimetre resolution albedo products with accuracies of 5% using consumer-grade digital camera and unmanned aerial vehicle (UAV) technologies. Our method comprises a workflow for processing, correcting and calibrating raw digital images using a white reference target, and upward and downward shortwave radiation measurements from broadband silicon pyranometers. We demonstrate the method with a set of UAV sorties over the western, K-sector of the Greenland Ice Sheet. The resulting albedo product, UAV10A1, covers 280 km2, at a resolution of 20 cm per pixel and has a root-mean-square difference of 3.7% compared to MOD10A1 and 4.9% compared to ground-based broadband pyranometer measurements. By continuously measuring downward solar irradiance, the technique overcomes previous limitations due to variable illumination conditions during and between surveys over glaciated terrain. The current miniaturization of multispectral sensors and incorporation of upward facing radiation sensors on UAV packages means that this technique will likely become increasingly attractive in field studies and used in a wide range of applications for high temporal and spatial resolution surface mapping of debris, dust, cryoconite and bioalbedo and for directly constraining surface energy balance models.
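    A heavily simplified sketch of the calibration idea: camera digital numbers are first referenced to a white target, and the resulting map is then tied to the broadband albedo given by the upward/downward pyranometer ratio. All numbers below are hypothetical, and the actual workflow includes further image corrections not shown here:

    ```python
    import numpy as np

    def calibrate_albedo(dn, dn_white, sw_up, sw_down):
        """Toy conversion of camera digital numbers (DN) to albedo.

        1. DN / DN_white gives reflectance relative to a white reference target.
        2. The map is rescaled so its scene mean matches the broadband albedo
           measured by paired pyranometers (sw_up / sw_down).
        """
        relative = np.asarray(dn, dtype=float) / float(dn_white)
        broadband_albedo = sw_up / sw_down
        return relative * (broadband_albedo / relative.mean())

    # Hypothetical 2x2 patch of image DNs over bright ice
    dn = np.array([[180.0, 200.0], [220.0, 200.0]])
    alb = calibrate_albedo(dn, dn_white=250.0, sw_up=420.0, sw_down=600.0)
    print(round(float(alb.mean()), 2))  # 0.7
    ```

    Continuously measuring the downward irradiance is what lets the scaling in step 2 absorb changes in illumination between and during surveys.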

  17. Textured digital elevation model formation from low-cost UAV LADAR/digital image data

    NASA Astrophysics Data System (ADS)

    Bybee, Taylor C.; Budge, Scott E.

    2015-05-01

Textured digital elevation models (TDEMs) have valuable uses in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D and 3D image registration techniques to be used. This paper describes the formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low cost, only coarse knowledge of its position and attitude is available, and thus both 2D and 3D image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of the position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.

  18. A data base of ASAS digital imagery. [Advanced Solid-state Array Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Irons, James R.; Meeson, Blanche W.; Dabney, Philip W.; Kovalick, William M.; Graham, David W.; Hahn, Daniel S.

    1992-01-01

    The Advanced Solid-State Array Spectroradiometer (ASAS) is an airborne, off-nadir tilting, imaging spectroradiometer that acquires digital image data for 29 spectral bands in the visible and near-infrared. The sensor is used principally for studies of the bidirectional distribution of solar radiation scattered by terrestrial surfaces. ASAS has acquired data for a number of terrestrial ecosystem field experiments and investigators have received over 170 radiometrically corrected, multiangle, digital image data sets. A database of ASAS digital imagery has been established in the Pilot Land Data System (PLDS) at the NASA/Goddard Space Flight Center to provide access to these data by the scientific community. ASAS, its processed data, and the PLDS are described, together with recent improvements to the sensor system.

  19. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

    Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market has become a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing the structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that the ISP (Image Signal Processor) of a digital camera usually introduces. The proposed method fuses multiple short-exposure images, which are noisy but less blurred, and is designed to avoid the ghost artifacts caused by hand shake and object motion. To achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and the layers created by a two-scale non-linear decomposition of the image are then modified. Once our approach has been applied to an input Bayer-pattern CFA image, the resulting Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of the proposed approach is evaluated by comparing SNR (Signal-to-Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are extended to match a variety of target sensitivities.
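    The core idea behind this kind of multi-frame ISO expansion — averaging several noisy short exposures while guarding against motion ghosts — can be sketched as follows. This is not the authors' algorithm (which works on Bayer data with a two-scale decomposition); the thresholded consistency mask is a simplified, hypothetical stand-in for their ghost-avoidance step.

```python
import numpy as np

def fuse_short_exposures(frames, motion_threshold=30.0):
    """Average aligned short exposures, masking moving pixels.

    frames: list of 2-D arrays (raw short-exposure images).
    Pixels that deviate strongly from the first frame in any frame
    are taken from the reference only -- a crude ghosting guard.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    ref = stack[0]
    # Mark pixels consistent with the reference in every frame
    consistent = np.all(np.abs(stack - ref) < motion_threshold, axis=0)
    fused = stack.mean(axis=0)
    # Fall back to the reference where motion was detected
    return np.where(consistent, fused, ref)
```

    Averaging N consistent frames reduces read-noise variance by roughly a factor of N, which is what buys the extra effective ISO headroom.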

  20. Improved detection and false alarm rejection using FLGPR and color imagery in a forward-looking system

    NASA Astrophysics Data System (ADS)

    Havens, Timothy C.; Spain, Christopher J.; Ho, K. C.; Keller, James M.; Ton, Tuan T.; Wong, David C.; Soumekh, Mehrdad

    2010-04-01

    Forward-looking ground-penetrating radar (FLGPR) has received a significant amount of attention for use in explosive-hazards detection. A drawback to FLGPR is that it results in an excessive number of false detections. This paper presents our analysis of the explosive-hazards detection system tested by the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD). The NVESD system combines an FLGPR with a visible-spectrum color camera. We present a target detection algorithm that uses a locally-adaptive detection scheme with spectrum-based features. The remaining FLGPR detections are then projected into the camera imagery and image-based features are collected. A one-class classifier is then used to reduce the number of false detections. We show that our proposed FLGPR target detection algorithm, coupled with our camera-based false alarm (FA) reduction method, is effective at reducing the number of FAs in test data collected at a US Army test facility.

  1. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
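    The real-time centroiding step mentioned above — reducing each bright ion spot on a frame to a sub-pixel position and total intensity — can be sketched with a simple flood-fill labelling pass. This is an illustrative implementation, not the authors' (presumably far faster) algorithm; the function name and 4-connectivity choice are assumptions.

```python
import numpy as np

def spot_centroids(frame, threshold):
    """Label connected above-threshold pixels by flood fill, then return
    each spot's intensity-weighted centroid (row, col) and total intensity."""
    mask = frame > threshold
    labels = np.zeros(frame.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # pixel already belongs to a spot
        current += 1
        stack = [seed]
        while stack:                      # 4-connected flood fill
            y, x = stack.pop()
            if (0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]
                    and mask[y, x] and not labels[y, x]):
                labels[y, x] = current
                stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    spots = []
    for k in range(1, current + 1):
        ys, xs = np.nonzero(labels == k)
        w = frame[ys, xs]
        spots.append((ys @ w / w.sum(), xs @ w / w.sum(), w.sum()))
    return spots
```

    The per-spot total intensity is what gets correlated with the PMT time-of-flight peak heights to assign arrival times to positions in the multi-hit case.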

  2. Assessing Deep Sea Communities Through Seabed Imagery

    NASA Astrophysics Data System (ADS)

    Matkin, A. G.; Cross, K.; Milititsky, M.

    2016-02-01

    The deep sea remains virtually unexplored. Human activity, such as oil and gas exploration and deep sea mining, is expanding further into the deep sea, increasing the need to survey and map extensive areas of this habitat in order to assess ecosystem health and value. The technology needed to explore this remote environment has been advancing. Seabed imagery can cover extensive areas of the seafloor and investigate areas where sampling with traditional coring methodologies is not possible (e.g. cold water coral reefs). Remotely operated vehicles (ROVs) are an expensive option, so drop or towed camera systems can provide a more viable and affordable alternative, while still allowing for real-time control. Assessment of seabed imagery in terms of presence, abundance and density of particular species can be conducted by bringing together a variety of analytical tools for a holistic approach. Sixteen deep sea transects located offshore West Africa were investigated with a towed digital video telemetry system (DTS). Both digital stills and video footage were acquired. An extensive data set was obtained from over 13,000 usable photographs, allowing for characterisation of the different habitats present in terms of community composition and abundance. All observed fauna were identified to the lowest taxonomic level and enumerated when possible, with densities derived after the seabed area was calculated for each suitable photograph. This methodology allowed for consistent assessment of the different habitat types present, overcoming constraints such as taxa that cannot be enumerated (e.g. sponges, corals or bryozoans), the presence of mobile and sessile species, or the level of taxonomic detail.
Although this methodology will not enable a full characterisation of a deep sea community, in terms of species composition for instance, it will allow a robust assessment of large areas of the deep sea in terms of sensitive habitats present and community characteristics of each habitat. Such data can be readily utilised for planning and licensing purposes and be potentially revisited in the future when taxonomic resolution increases, for a more detailed characterisation or monitoring of this poorly described environment.

  3. Building a 2.5D Digital Elevation Model from 2D Imagery

    NASA Technical Reports Server (NTRS)

    Padgett, Curtis W.; Ansar, Adnan I.; Brennan, Shane; Cheng, Yang; Clouse, Daniel S.; Almeida, Eduardo

    2013-01-01

    When projecting imagery into a georeferenced coordinate frame, one needs to have some model of the geographical region that is being projected to. This model can sometimes be a simple geometrical curve, such as an ellipse or even a plane. However, to obtain accurate projections, one needs to have a more sophisticated model that encodes the undulations in the terrain including things like mountains, valleys, and even manmade structures. The product that is often used for this purpose is a Digital Elevation Model (DEM). The technology presented here generates a high-quality DEM from a collection of 2D images taken from multiple viewpoints, plus pose data for each of the images and a camera model for the sensor. The technology assumes that the images are all of the same region of the environment. The pose data for each image is used as an initial estimate of the geometric relationship between the images, but the pose data is often noisy and not of sufficient quality to build a high-quality DEM. Therefore, the source imagery is passed through a feature-tracking algorithm and multi-plane-homography algorithm, which refine the geometric transforms between images. The images and their refined poses are then passed to a stereo algorithm, which generates dense 3D data for each image in the sequence. The 3D data from each image is then placed into a consistent coordinate frame and passed to a routine that divides the coordinate frame into a number of cells. The 3D points that fall into each cell are collected, and basic statistics are applied to determine the elevation of that cell. The result of this step is a DEM that is in an arbitrary coordinate frame. This DEM is then filtered and smoothed in order to remove small artifacts. 
The final step in the algorithm is to take the initial DEM and rotate and translate it to be in the world coordinate frame [such as UTM (Universal Transverse Mercator), MGRS (Military Grid Reference System), or geodetic] such that it can be saved in a standard DEM format and used for projection.
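    The gridding step described above — dividing the coordinate frame into cells and applying basic statistics to the 3D points in each cell — can be sketched as follows. The function name, the choice of median as the cell statistic, and NaN for empty cells are illustrative assumptions, not details taken from the original work.

```python
import numpy as np

def points_to_dem(points, cell_size):
    """Grid scattered 3-D points (x, y, z) into a DEM by taking the
    median z of the points falling in each cell; empty cells are NaN."""
    pts = np.asarray(points, dtype=float)
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    # Assign every point to a grid cell (row, col)
    cols = ((pts[:, 0] - x0) // cell_size).astype(int)
    rows = ((pts[:, 1] - y0) // cell_size).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r in range(dem.shape[0]):
        for c in range(dem.shape[1]):
            z = pts[(rows == r) & (cols == c), 2]
            if z.size:
                dem[r, c] = np.median(z)   # robust to stereo outliers
    return dem
```

    A median (rather than a mean) is a common choice here because dense stereo output typically contains outlier points that would otherwise bias cell elevations.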

  4. [Intra-oral digital photography with the non professional camera--simplicity and effectiveness at a low price].

    PubMed

    Sackstein, M

    2006-10-01

    Over the last five years digital photography has become ubiquitous. For the family photo album, a 4 or 5 megapixel camera costing about 2000 NIS will produce satisfactory results for most people. However, for intra-oral photography the common wisdom holds that only professional photographic equipment is up to the task. Such equipment typically costs around 12,000 NIS and includes the camera body, an attachable macro lens and a ringflash. The following article challenges this conception. Although professional equipment does produce the most exemplary results, a highly effective database of clinical pictures can be compiled even with a "non-professional" digital camera. Since the year 2002, my clinical work has been routinely documented with digital cameras of the Nikon CoolPix series. The advantages are that these digicams are economical both in price and in size and allow easy transport and operation when compared to their expensive and bulky professional counterparts. The details of how to use a non-professional digicam to produce and maintain an effective clinical picture database, for documentation, monitoring, demonstration and professional fulfillment, are described below.

  5. PRo3D®: A Tool for High Resolution Rendering and Geological Analysis of Martian Rover-Derived Digital Outcrop Models.

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Barnes, R.; Ortner, T.; Huber, B.; Paar, G.; Muller, J. P.; Giordano, M.; Willner, K.; Traxler, C.; Juhart, K.; Fritz, L.; Hesina, G.; Tasdelen, E.

    2015-12-01

    NASA's Mars Exploration Rovers (MER) and Mars Science Laboratory Curiosity Rover (MSL) are proxies for field geologists on Mars, taking high resolution imagery of rock formations and landscapes which is analysed in detail on Earth. Panoramic digital cameras (PanCam on MER and MastCam on MSL) are used for characterising the geology of rock outcrops along rover traverses. A key focus is on sedimentary rocks that have the potential to contain evidence for ancient life on Mars. Clues to determine ancient sedimentary environments are preserved in layer geometries, sedimentary structures and grain size distribution. The panoramic camera systems take stereo images which are co-registered to create 3D point clouds of rock outcrops to be quantitatively analysed much like geologists would do on Earth. The EU FP7 PRoViDE project is compiling all Mars rover vision data into a database accessible through a web-GIS (PRoGIS) and 3D viewer (PRo3D). Stereo-imagery selected in PRoGIS can be rendered in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features. Dip and strike is calculated within PRo3D from mapped bedding contacts and fracture traces. Results from multiple outcrops can be integrated in PRoGIS to gain a detailed understanding of the geological features within an area. These tools have been tested on three case studies; Victoria Crater, Yellowknife Bay and Shaler. Victoria Crater, in the Meridiani Planum region of Mars, was visited by the MER-B Opportunity Rover. Erosional widening of the crater produced <15 m high outcrops which expose ancient Martian eolian bedforms. Yellowknife Bay and Shaler were visited in the early stages of the MSL mission, and provide excellent opportunities to characterise Martian fluvio-lacustrine sedimentary features. 
Development of these tools is crucial to exploitation of vision data from future missions, such as the 2018 ExoMars Rover and the NASA 2020 mission. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.

  6. Comparison between different cost devices for digital capture of X-ray films: an image characteristics detection approach.

    PubMed

    Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés

    2012-02-01

    A common teleradiology practice is digitizing films. Because the costs of specialized digitizers are very high, there is a trend toward using conventional scanners and digital cameras. Statistical clinical studies, which are very difficult to carry out, are required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized digitizer, the ICR (US$ 15,000); a conventional scanner, the UMAX (US$ 1,800); and a digital camera, the LUMIX (US$ 450, but requiring an additional support system and a light box for about US$ 400). Test patterns printed on films were used. All three devices detected gray levels lower than the real values, while showing acceptable contrast and low geometric deformation. All three devices are appropriate solutions, but a digital camera requires more operator training and more settings.

  7. Digital photography for the light microscope: results with a gated, video-rate CCD camera and NIH-image software.

    PubMed

    Shaw, S L; Salmon, E D; Quatrano, R S

    1995-12-01

    In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.

  8. Digital Forensics Using Local Signal Statistics

    ERIC Educational Resources Information Center

    Pan, Xunyu

    2011-01-01

    With the rapid growth of the Internet and the popularity of digital imaging devices, digital imagery has become our major information source. Meanwhile, the development of digital manipulation techniques employed by most image editing software brings new challenges to the credibility of photographic images as the definite records of events. We…

  9. Designing for Diverse Classrooms: Using iPads and Digital Cameras to Compose eBooks with Emergent Bilingual/Biliterate Four-Year-Olds

    ERIC Educational Resources Information Center

    Rowe, Deborah Wells; Miller, Mary E.

    2016-01-01

    This paper reports the findings of a two-year design study exploring instructional conditions supporting emerging, bilingual/biliterate, four-year-olds' digital composing. With adult support, children used child-friendly, digital cameras and iPads equipped with writing, drawing and bookmaking apps to compose multimodal, multilingual eBooks…

  10. 2010 A Digital Odyssey: Exploring Document Camera Technology and Computer Self-Efficacy in a Digital Era

    ERIC Educational Resources Information Center

    Hoge, Robert Joaquin

    2010-01-01

    Within the sphere of education, navigating throughout a digital world has become a matter of necessity for the developing professional, as with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT, to see if there is a relationship between teachers' comfort with DCT and the…

  11. Digital Diversity: A Basic Tool with Lots of Uses

    ERIC Educational Resources Information Center

    Coy, Mary

    2006-01-01

    In this article the author relates how the digital camera has altered the way she teaches and the way her students learn. She also emphasizes the importance for teachers to have software that can edit, print, and incorporate photos. She cites several instances in which a digital camera can be used: (1) PowerPoint presentations; (2) Open house; (3)…

  12. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
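    One of the four directional transition probability matrices described above can be sketched for the horizontal direction. This is a simplified illustration of the general technique (thresholded difference array, then Markov transition probabilities); the function name and the threshold value T=3 are assumptions, not taken from the paper.

```python
import numpy as np

def transition_matrix(block, T=3):
    """Horizontal-direction Markov transition probability matrix of a
    thresholded difference array, yielding a (2T+1) x (2T+1) feature set."""
    # Horizontal difference array of the (e.g. JPEG Y-component) block
    d = block[:, :-1].astype(int) - block[:, 1:].astype(int)
    d = np.clip(d, -T, T)                 # threshold differences to [-T, T]
    # Adjacent (previous, next) difference pairs along each row
    prev, nxt = d[:, :-1].ravel(), d[:, 1:].ravel()
    M = np.zeros((2 * T + 1, 2 * T + 1))
    for p, n in zip(prev, nxt):
        M[p + T, n + T] += 1              # count each transition
    row_sums = M.sum(axis=1, keepdims=True)
    # Normalise rows into probabilities; empty rows stay zero
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)
```

    The flattened matrix entries (49 per direction for T=3) are the kind of features that would then be fed to a multi-class SVM.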

  13. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light in our DMD camera can be flexibly modulated, enabling camera pixels always to receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate different light intensities to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
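    The flavour of such a per-pixel exposure control loop can be sketched as one iteration of a feedback step: shorten the DMD duty cycle of saturated pixels, lengthen it for dark ones, and recover scene radiance from the current duty cycle. This is a hypothetical sketch, not the authors' adaptive algorithm; all names and the halving/doubling step are assumptions.

```python
import numpy as np

def adapt_exposure(measured, duty, low=0.1, high=0.9, step=0.5):
    """One iteration of a per-pixel exposure control loop.

    measured -- normalised sensor readings in [0, 1] for the current frame
    duty     -- per-pixel DMD duty cycles in (0, 1] used for that frame
    """
    # Recover radiance from the duty cycle actually used for this frame
    radiance = measured / duty
    new_duty = duty.copy()
    new_duty[measured > high] *= step            # too bright: shorten exposure
    new_duty[measured < low] = np.minimum(
        new_duty[measured < low] / step, 1.0)    # too dark: lengthen exposure
    return new_duty, radiance
```

    Iterating this loop drives every pixel toward the mid-range of the sensor, which is how pixel-level modulation extends the effective dynamic range.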

  14. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  15. Mosaicked Historic Airborne Imagery from Seward Peninsula, Alaska, Starting in the 1950's

    DOE Data Explorer

    Cherry, Jessica; Wirth, Lisa

    2016-12-06

    Historical airborne imagery for each Seward Peninsula NGEE Arctic site - Teller, Kougarok, Council - with multiple years for each site. This dataset includes mosaicked, geolocated and, where possible, orthorectified, historic airborne and recent satellite imagery. The older photos were sourced from USGS's Earth Explorer site and the newer, satellite imagery is from the Statewide Digital Mapping Initiative (SDMI) project managed by the Geographic Information Network of Alaska on behalf of the state of Alaska.

  16. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  17. Printed products for digital cameras and mobile devices

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  18. Modeling of digital information optical encryption system with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should allow the construction of a high-speed optical encryption system. Results of modeling a digital-information optical encryption system with spatially incoherent illumination are presented. The input information is displayed on the first SLM and the encryption element on the second SLM. Factors taken into account include the resolution of the SLMs and camera, hologram reconstruction noise, camera noise and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), a low bit error rate and high crypto-strength.

  19. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
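    The figure of merit defined above — the performance gap of the weakest parameter, in orders of magnitude — can be written down directly. This sketch assumes every parameter is expressed so that larger is better (so a dark limit, for example, would be entered as its reciprocal); the parameter names and values in the usage below are hypothetical.

```python
import math

def figure_of_merit(camera, eye):
    """Per-parameter gap between eye and camera in orders of magnitude;
    the figure of merit is the gap of the weakest (worst) parameter.

    camera, eye -- dicts mapping parameter name -> value, with every
    parameter scaled so that larger is better.
    """
    gaps = {k: math.log10(eye[k] / camera[k]) for k in eye}
    worst = max(gaps, key=gaps.get)       # largest gap = weakest parameter
    return gaps[worst], worst
```

    With a hypothetical sensor whose dynamic range trails the eye by four orders of magnitude while its spatial resolution matches, the figure of merit is 4.0, attributed to dynamic range, consistent with the paper's finding that dynamic range and dark limit dominate the gap.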

  20. Automatic, Satellite-Linked "Webcams" as a Tool in Ice-Shelf and Iceberg Research.

    NASA Astrophysics Data System (ADS)

    Ross, R.; Okal, M. H.; Thom, J. E.; Macayeal, D. R.

    2004-12-01

    Important dynamic events governing the behavior of ice shelves and icebergs are episodic in time and small in scale, making them difficult to observe. Traditional satellite imagery is acquired on a rigid schedule with coarse spatial resolution and this means that collisions between icebergs or the processes which create ice "mélange" that fills detachment rifts leading to ice-shelf calving, to give examples, cannot be readily observed. To overcome the temporal and spatial gaps in traditional remote sensing, we have deployed cameras at locations in Antarctica where research is conducted on the calving and subsequent evolution of icebergs. One camera is located at the edge of iceberg C16 in the Ross Sea, and is positioned to capture visual imagery of collisions between C16 and neighboring B15A. The second camera is located within the anticipated detachment rift of a "nascent" iceberg on the Ross Ice Shelf. The second camera is positioned to capture visual imagery of the rift's propagation and the in-fill of ice mélange, which constrains the mechanical influence of such rifts on the surrounding ice shelf. Both cameras are designed for connection to the internet (hence are referred to as "webcams") and possess variable image qualities and image-control technology. The cameras are also connected to data servers via the Iridium satellite telephone network and produce a daily image that is transmitted to the internet through the Iridium connection. Results of the initial trial deployments will be presented as a means of assessing both the techniques involved and the value of the scientific information acquired by these webcams. In the case of the iceberg webcam, several collisions between B15A and C16 were monitored over the period between January, 2003 and December, 2004. The time-lapse imagery obtained through this period showed giant "push mounds" of damaged firn on the edge and surface of the icebergs within the zones of contact as a consequence of the collisions. 
The push mounds were subsequently unstable, and calved as small scale ice debris soon after the collision, thereby returning the iceberg edge to a clean, vertical cliff-like appearance. A correlation between the iceberg collision record available from the webcam and data from a seismometer located on C16 is anticipated once the seismometer data is recovered. The webcam associated with the detachment rift of the nascent iceberg on the Ross Ice Shelf is planned to be deployed in early November, 2004. If results are available from this deployment, they too will be discussed.

  1. Analysis of ArcticDEM orthorectification for polar navigational traverses

    NASA Astrophysics Data System (ADS)

    Menio, E. C.; Deeb, E. J.; Weale, J.; Courville, Z.; Tracy, B.; Cloutier, M. D.; Cothren, J. D.; Liu, J.

    2017-12-01

    The availability and accessibility of high-resolution satellite imagery allows operational support teams to visually assess physical risks along traverse routes before and during the field season. In support of operations along the Greenland Inland Traverse (GrIT), DigitalGlobe's WorldView 0.5m resolution panchromatic imagery is analyzed to identify and digitize crevasse features along the route from Thule Air Force Base to Summit Station, Greenland. In the spring of 2016, field teams reported up to 150 meters of offset between the location of crevasse features on the ground and the location of the same feature on the imagery provided. Investigation into this issue identified the need to orthorectify imagery—use digital elevation models (DEMs) to correct viewing geometry distortions—to improve navigational accuracy in the field. It was previously thought that orthorectification was not necessary for applications in relatively flat terrain such as ice sheets. However, the surface elevations on the margins of the Greenland Ice Sheet vary enough to cause distortions in imagery, if taken obliquely. As is standard for requests, the Polar Geospatial Center (PGC) provides orthorectified imagery using the MEaSUREs Greenland Ice Mapping Project (GIMP) 30m digital elevation model. Current, higher-resolution elevation datasets, such as the ArcticDEM (2-5m resolution) and WorldView stereopair DEMs (2-3m resolution), are available for use in orthorectification. This study examines three heavily crevassed areas along the GrIT traverse, as identified in 2015 and 2016 imagery. We extracted elevation profiles along the GrIT route from each of the three DEMs: GIMP, ArcticDEM, and WorldView stereopair mosaic. Results show the coarser GIMP data deviating significantly from the ArcticDEM and WorldView data, at points by up to 80m, which is seen as offset of features in plan view.
In-situ Ground Penetrating Radar (GPR) surveys of crevasse crossings allow for evaluation of geopositional accuracy of each resulting orthorectified photo and a quantitative analysis of plan view offset.

  2. Image Analysis and Classification Based on Soil Strength

    DTIC Science & Technology

    2016-08-01

    Satellite imagery classification is useful for a variety of commonly used applications, such as land use classification, agriculture, wetland... required use of a coincident digital elevation model (DEM) and a high-resolution orthophotograph collected by the National Agriculture Imagery Program...

  3. Integration of hard copy and soft copy exploitation

    NASA Astrophysics Data System (ADS)

    Fultz, Roy C., Jr.

    1996-11-01

    Exploitation of remotely sensed and aerially derived imagery has, in the past, been performed primarily through the use of analog light tables, by displaying individual pieces or rolls of imagery over a brightly lit surface to allow light through the nonopaque surface of the film medium. The interpreter would then peer through optical viewing scopes to analyze the imagery. Over the course of the last two decades, digital data, better known as "softcopy imagery," has for many become the path that technology dictates. Softcopy imagery offers many benefits, such as the ability to manipulate imagery in ways analog workstations cannot and were never designed to do. The functions that can be performed on softcopy imagery are numerous and growing constantly: image spatial rectification, pixel manipulation, image contrast, and brightness enhancements. All are performed by running algorithmic operations on the digital data. It has become evident that in the future a large portion of imagery analysis will be performed in softcopy. However, studies indicate that aerial imagery will continue to be acquired via hardcopy means for many civil, educational, and commercial applications in the foreseeable future, making it clear that any large-scale transformation from hardcopy to softcopy will not be feasible for a long time to come. A major factor slowing this transition is the more than 35 years of hardcopy imagery archived and housed in facilities throughout the world, including the recently declassified "Corona" satellite imagery, which will provide a wealth of hardcopy data for use by ecologists and conservationists. 
Yes, the technology to transfer hardcopy to softcopy exists, but the time and cost required to complete this task would be phenomenal and, in many cases, even when digitization and storage become affordable, it may still prove beneficial to retain the imagery in hardcopy form to preserve the highest quality resolution. An analogy that I feel best portrays this dilemma is the automobile: eventually all automobiles will be electric or hydrogen driven, but the time and cost involved in the transformation predict a slow progression. Since a predominant amount of imagery analysis, especially in the intelligence community, is the comparison of new imagery data to that of archived imagery in order to detect changes or to monitor progressions, it is conceivable that the majority of imagery analysts will be using a combination of hardcopy and softcopy workstations in order to facilitate analysis. The incorporation of hardcopy and softcopy functions into one workstation is the most cost-effective and timely means by which in-depth analysis can be performed.

  4. International Space Station Instruments Collect Imagery of Natural Disasters

    NASA Technical Reports Server (NTRS)

    Evans, C. A.; Stefanov, W. L.

    2013-01-01

    A new focus for utilization of the International Space Station (ISS) is conducting basic and applied research that directly benefits Earth's citizenry. In the Earth Sciences, one such activity is collecting remotely sensed imagery of disaster areas and making those data immediately available through the USGS Hazards Data Distribution System, especially in response to activations of the International Charter for Space and Major Disasters (known informally as the "International Disaster Charter", or IDC). The ISS, together with other NASA orbital sensor assets, responds to IDC activations following notification by the USGS. Most of the activations are due to natural hazard events, including large floods, impacts of tropical systems, major fires, and volcanic eruptions and earthquakes. Through the ISS Program Science Office, we coordinate with ISS instrument teams for image acquisition using several imaging systems. As of 1 August 2013, we have successfully contributed imagery data in support of 14 Disaster Charter Activations, including regions in both Haiti and the east coast of the US impacted by Hurricane Sandy; flooding events in Russia, Mozambique, India, Germany and western Africa; and forest fires in Algeria and Ecuador. ISS-based sensors contributing data include the Hyperspectral Imager for the Coastal Ocean (HICO), the ISERV (ISS SERVIR Environmental Research and Visualization System) Pathfinder camera mounted in the US Window Observational Research Facility (WORF), the ISS Agricultural Camera (ISSAC), formerly operating from the WORF, and high resolution handheld camera photography collected by crew members (Crew Earth Observations). When orbital parameters and operations support data collection, ISS-based imagery adds to the resources available to disaster response teams and contributes to the public domain record of these events for later analyses.

  5. EXPERIMENTS IN LITHOGRAPHY FROM REMOTE SENSOR IMAGERY.

    USGS Publications Warehouse

    Kidwell, R. H.; McSweeney, J.; Warren, A.; Zang, E.; Vickers, E.

    1983-01-01

    Imagery from remote sensing systems such as the Landsat multispectral scanner and return beam vidicon, as well as synthetic aperture radar and conventional optical camera systems, contains information at resolutions far in excess of that which can be reproduced by the lithographic printing process. The data often require special handling to produce both standard and special map products. Some conclusions have been drawn regarding processing techniques, procedures for production, and printing limitations.

  6. Content-based image exploitation for situational awareness

    NASA Astrophysics Data System (ADS)

    Gains, David

    2008-04-01

    Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.

  7. Teaching with Technology: Step Back and Hand over the Cameras! Using Digital Cameras to Facilitate Mathematics Learning with Young Children in K-2 Classrooms

    ERIC Educational Resources Information Center

    Northcote, Maria

    2011-01-01

    Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…

  8. A stereoscopic lens for digital cinema cameras

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  9. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  10. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
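
    A common way to recover amplitude and phase from a single off-axis hologram is Fourier filtering of one interference sideband. The sketch below is the generic textbook approach in plain NumPy, not necessarily the authors' specific pipeline; the function and parameter names are illustrative:

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier, radius):
    """Recover the complex object field from an off-axis hologram by
    isolating one interference sideband in the Fourier domain, shifting
    it to the origin, and inverse transforming."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2  # sideband mask
    sideband = np.roll(H * mask, (-carrier[0], -carrier[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.abs(field), np.angle(field)  # amplitude and phase maps
```

For a hologram recorded with a tilted reference beam, `carrier` is the pixel offset of the sideband peak from the spectrum center, which can be located automatically as the off-center maximum of `|H|`.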

  11. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, according to the objects' movement, result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited with CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For digital TDI, only a small number of rows is read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
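
    The shift-and-accumulate idea can be sketched in a few lines of software (the paper's version runs on an FPGA). Assuming, as a simplification, that the object moves exactly one sensor row per frame, each scene line is summed over `stages` consecutive frames; the names and the one-row-per-frame assumption are illustrative:

```python
from collections import deque
import numpy as np

def digital_tdi(frames, stages):
    """Digital TDI for an object moving one row per frame.

    Only the first `stages` rows of each frame are used (mirroring the
    paper, where just a few rows of the CMOS array are addressed). Each
    output line is the sum of the same scene line observed in rows
    0..stages-1 of consecutive frames, boosting effective exposure.
    """
    acc = deque()  # partially integrated lines, newest first
    out = []
    for frame in frames:
        rows = np.asarray(frame, dtype=np.int64)[:stages]
        acc.appendleft(np.zeros_like(rows[0]))  # new scene line enters stage 0
        for stage, partial in enumerate(acc):
            partial += rows[stage]  # line that entered k frames ago reads row k
        if len(acc) == stages:  # oldest line has passed through all stages
            out.append(acc.pop())
    return out
```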

  12. Real-time person detection in low-resolution thermal infrared imagery with MSER and CNNs

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Müller, Thomas; Willersinn, Dieter; Beyerer, Jürgen

    2016-10-01

    In many camera-based systems, person detection and localization is an important step for safety and security applications such as search and rescue, reconnaissance, surveillance, or driver assistance. Long-wave infrared (LWIR) imagery promises to simplify this task because it is less affected by background clutter or illumination changes. In contrast to much related work, we make no assumptions about the movement of persons or the camera, i.e., persons may stand still, the camera may move, or any combination thereof. Furthermore, persons may appear at arbitrary distances from the camera, so that distant persons appear at low resolution. To address this task, we propose a two-stage system comprising a proposal generation method and a classifier that verifies whether the detected proposals really are persons. Instead of using all possible proposals, as in sliding-window approaches, we apply Maximally Stable Extremal Regions (MSER) and classify the detected proposals afterwards with a Convolutional Neural Network (CNN). The MSER algorithm acts as a hot spot detector when applied to LWIR imagery. Because the body temperature of persons is usually higher than the background, they appear as hot spots in the image. However, the MSER algorithm is unable to distinguish between different kinds of hot spots. Thus, all other LWIR sources, such as windows, animals, or vehicles, will be detected too. Still, by applying MSER, the number of proposals is reduced significantly in comparison to a sliding-window approach, which allows employing the highly discriminative capabilities of deep neural network classifiers recently demonstrated in several applications such as face recognition and image content classification. We suggest using a CNN as the classifier for the detected hot spots and training it to discriminate between person hot spots and all other hot spots. 
We specifically design a CNN that is suitable for the low-resolution person hot spots that are common with LWIR imagery applications and is capable of fast classification. Evaluation on several different LWIR person detection datasets shows an error rate reduction of up to 80 percent compared to previous approaches consisting of MSER, local image descriptors and a standard classifier such as an SVM or boosted decision trees. Further time measurements show that the proposed processing chain is capable of real-time person detection in LWIR camera streams.
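
    The proposal stage can be illustrated as follows. The paper uses MSER (available, for instance, as `cv2.MSER_create` in OpenCV); to keep this sketch dependency-free, a fixed-threshold connected-component detector stands in for MSER as the hot-spot proposer. Each returned box would then be cropped and passed to the CNN classifier:

```python
import numpy as np
from collections import deque

def hot_spot_proposals(lwir, threshold):
    """Simplified stand-in for the MSER proposal stage: persons are
    warmer than the background, so connected regions above an intensity
    threshold become detection proposals (bounding boxes)."""
    hot = lwir > threshold
    seen = np.zeros_like(hot, dtype=bool)
    boxes = []
    h, w = hot.shape
    for y0 in range(h):
        for x0 in range(w):
            if hot[y0, x0] and not seen[y0, x0]:
                # BFS flood fill over the 4-connected hot region
                q = deque([(y0, x0)])
                seen[y0, x0] = True
                ys, xs = [], []
                while q:
                    y, x = q.popleft()
                    ys.append(y)
                    xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny_, nx_ = y + dy, x + dx
                        if 0 <= ny_ < h and 0 <= nx_ < w and hot[ny_, nx_] and not seen[ny_, nx_]:
                            seen[ny_, nx_] = True
                            q.append((ny_, nx_))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes  # (x_min, y_min, x_max, y_max) per proposal
```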

  13. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
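
    The "eliminate common pixels" step amounts to frame differencing: subtract the pre-illumination image from the illuminated one, threshold the result, and take the centroid of what remains as the spot position. A minimal sketch (the threshold value and function name are illustrative, not from the patent):

```python
import numpy as np

def isolate_laser_spot(before, after, threshold=30):
    """Suppress the static background by differencing a frame captured
    before laser illumination against one captured with the laser on;
    pixels that changed belong to the laser spot, and the centroid of
    that residual gives the spot location for the ranging analysis."""
    diff = np.abs(after.astype(np.int32) - before.astype(np.int32))
    mask = diff > threshold
    if not mask.any():
        return None  # no spot detected
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())  # (x, y) spot centroid
```

The disparity used for ranging is then the offset between this centroid and the fixed reference point in the video frame.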

  14. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system (GPS) and inertial measurement unit (IMU) sensors.

  15. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high-dynamic-range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all the information is present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic real-time full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. 
The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
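
    The fusion of smartly acquired exposures can be illustrated with a single-scale, Mertens-style weighted average over a bracketed stack (OpenCV ships a multi-scale version as `cv2.createMergeMertens`). The abstract does not specify the system's actual fusion mechanism, so this is only a generic sketch for grayscale images with values in [0, 1]:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Merge a bracketed stack of grayscale exposures (values in [0, 1]).

    Each pixel becomes a weighted average across exposures, with weights
    favoring mid-range (well-exposed) values via a Gaussian around 0.5.
    A single-scale sketch; production systems blend with image pyramids
    to avoid seams between differently weighted regions.
    """
    stack = np.asarray(stack, dtype=np.float64)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-12  # normalize across exposures
    return (weights * stack).sum(axis=0)
```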

  16. The influence of the in situ camera calibration for direct georeferencing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Barrios, R.; Centeno, J.

    2014-11-01

    The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process due to the GNSS/INS technologies, the accuracies of the obtained results depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, due to the impossibility of having all sensors in the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which is not available in the conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV (TM) system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs are performed and their results are analyzed and used in photogrammetric experiments. 
The IOPs from the in situ camera calibration significantly improve the accuracy of the direct georeferencing. The results obtained from the experiments are shown and discussed.
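
    Where the IOPs enter the pipeline is in reducing raw image measurements to ideal perspective geometry before georeferencing. A simplified sketch of applying IOPs with a principal point (x0, y0) and radial distortion coefficients k1, k2 only; sign and balancing conventions vary between calibration packages, so treat this as illustrative:

```python
def correct_interior_orientation(x, y, iop):
    """Apply interior orientation parameters (IOPs) to raw image
    coordinates: shift by the principal point offset and rescale by a
    polynomial radial distortion model (k1, k2 terms; a common
    simplification of the full calibration model)."""
    xc, yc = x - iop["x0"], y - iop["y0"]  # center on principal point
    r2 = xc * xc + yc * yc                  # squared radial distance
    scale = 1 + iop["k1"] * r2 + iop["k2"] * r2 * r2
    return xc * scale, yc * scale
```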

  17. Structural geologic interpretations from radar imagery

    USGS Publications Warehouse

    Reeves, Robert G.

    1969-01-01

    Certain structural geologic features may be more readily recognized on sidelooking airborne radar (SLAR) images than on conventional aerial photographs, other remote sensor imagery, or by ground observations. SLAR systems look obliquely to one or both sides and their images resemble aerial photographs taken at low sun angle with the sun directly behind the camera. They differ from air photos in geometry, resolution, and information content. Radar operates at much lower frequencies than the human eye, camera, or infrared sensors, and thus "sees" differently. The lower frequency enables it to penetrate most clouds and some precipitation, haze, dust, and some vegetation. Radar provides its own illumination, which can be closely controlled in intensity and frequency. It is narrow band, or essentially monochromatic. Low relief and subdued features are accentuated when viewed from the proper direction. Runs over the same area in significantly different directions (more than 45° from each other) show that images taken in one direction may emphasize features that are not emphasized on those taken in the other direction; optimum direction is determined by those features which need to be emphasized for study purposes. Lineaments interpreted as faults stand out on radar imagery of central and western Nevada; folded sedimentary rocks cut by faults can be clearly seen on radar imagery of northern Alabama. In these areas, certain structural and stratigraphic features are more pronounced on radar images than on conventional photographs; thus radar imagery materially aids structural interpretation.

  18. Digital Image Support in the ROADNet Real-time Monitoring Platform

    NASA Astrophysics Data System (ADS)

    Lindquist, K. G.; Hansen, T. S.; Newman, R. L.; Vernon, F. L.; Nayak, A.; Foley, S.; Fricke, T.; Orcutt, J.; Rajasekar, A.

    2004-12-01

    The ROADNet real-time monitoring infrastructure has allowed researchers to integrate geophysical monitoring data from a wide variety of signal domains. Antelope-based data transport, relational-database buffering and archiving, backup/replication/archiving through the Storage Resource Broker, and a variety of web-based distribution tools create a powerful monitoring platform. In this work we discuss our use of the ROADNet system for the collection and processing of digital image data. Remote cameras have been deployed at approximately 32 locations as of September 2004, including the SDSU Santa Margarita Ecological Reserve, the Imperial Beach pier, and the Pinon Flats geophysical observatory. Fire monitoring imagery has been obtained through a connection to the HPWREN project. Near-real-time images obtained from the R/V Roger Revelle include records of seafloor operations by the JASON submersible, as part of a maintenance mission for the H2O underwater seismic observatory. We discuss acquisition mechanisms and the packet architecture for image transport via Antelope orbservers, including multi-packet support for arbitrarily large images. Relational database storage supports archiving of timestamped images, image-processing operations, grouping of related images and cameras, support for motion-detect triggers, thumbnail images, pre-computed video frames, support for time-lapse movie generation and storage of time-lapse movies. Available ROADNet monitoring tools include both orbserver-based display of incoming real-time images and web-accessible searching and distribution of images and movies driven by the relational database (http://mercali.ucsd.edu/rtapps/rtimbank.php). An extension to the Kepler Scientific Workflow System also allows real-time image display via the Ptolemy project. Custom time-lapse movies may be made from the ROADNet web pages.

  19. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 x 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. 
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.

  20. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    NASA Astrophysics Data System (ADS)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high-ramp-rate events, are strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible-light CCD sky cameras positioned at 2 km spacing over an area 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of 200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. 
To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being organized to collect distributed cloud data sets suitable for MODIS-CERES cloud radiation science and solar forecasting algorithm development. A low-cost and robust sensor design suitable for large-scale fabrication and long-term deployment has been developed during the project prototyping phase.

  1. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    NASA Astrophysics Data System (ADS)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their costs of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras, which are very interesting for meteor observation, have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  2. Forensics for flatbed scanners

    NASA Astrophysics Data System (ADS)

    Gloe, Thomas; Franz, Elke; Winkler, Antje

    2007-02-01

Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, utilizing image sensor noise to identify the origin of scanned images seems possible. To characterize flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and downscaling, as examples of such particularities of flatbed scanners, on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
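The identification approach described above, correlating an image's noise residual against a per-device reference pattern, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the box-filter denoiser stands in for the wavelet filtering typically used, and all function names are ours.

```python
import numpy as np

def smooth(img):
    """Crude denoiser: 3x3 box filter (a stand-in for wavelet denoising)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def reference_pattern(images):
    """Estimate a device's noise reference pattern by averaging the
    noise residuals of several images from that device."""
    return np.mean([img - smooth(img) for img in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation of two mean-subtracted arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def identify(image, patterns, threshold=0.01):
    """Attribute an image to the device whose reference pattern best
    correlates with the image's noise residual, or None below threshold."""
    residual = image - smooth(image)
    scores = {name: correlation(residual, p) for name, p in patterns.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None
```

For a flatbed scanner's linear sensor, averaging the 2-D array reference pattern along the scan direction would yield the sensor line reference pattern the authors also consider.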

  3. Land-based infrared imagery for marine mammal detection

    NASA Astrophysics Data System (ADS)

    Graber, Joseph; Thomson, Jim; Polagye, Brian; Jessup, Andrew

    2011-09-01

    A land-based infrared (IR) camera is used to detect endangered Southern Resident killer whales in Puget Sound, Washington, USA. The observations are motivated by a proposed tidal energy pilot project, which will be required to monitor for environmental effects. Potential monitoring methods also include visual observation, passive acoustics, and active acoustics. The effectiveness of observations in the infrared spectrum is compared to observations in the visible spectrum to assess the viability of infrared imagery for cetacean detection and classification. Imagery was obtained at Lime Kiln Park, Washington from 7/6/10-7/9/10 using a FLIR Thermovision A40M infrared camera (7.5-14μm, 37°HFOV, 320x240 pixels) under ideal atmospheric conditions (clear skies, calm seas, and wind speed 0-4 m/s). Whales were detected during both day (9 detections) and night (75 detections) at distances ranging from 42 to 162 m. The temperature contrast between dorsal fins and the sea surface ranged from 0.5 to 4.6 °C. Differences in emissivity from sea surface to dorsal fin are shown to aid detection at high incidence angles (near grazing). A comparison to theory is presented, and observed deviations from theory are investigated. A guide for infrared camera selection based on site geometry and desired target size is presented, with specific considerations regarding marine mammal detection. Atmospheric conditions required to use visible and infrared cameras for marine mammal detection are established and compared with 2008 meteorological data for the proposed tidal energy site. Using conservative assumptions, infrared observations are predicted to provide a 74% increase in hours of possible detection, compared with visual observations.
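The reported dorsal-fin contrasts (0.5 to 4.6 °C above the sea surface) suggest a minimal detection scheme: flag pixels whose apparent temperature exceeds the sea-surface background by more than a chosen contrast threshold. A sketch under that assumption (not the authors' processing chain; the threshold value is illustrative):

```python
import numpy as np

def detect_warm_targets(temps, contrast_threshold=0.5):
    """Flag pixels whose apparent temperature exceeds the sea-surface
    background (estimated here as the frame median) by more than the
    contrast threshold, in deg C.

    temps: 2-D array of apparent temperatures.
    Returns a boolean detection mask.
    """
    background = np.median(temps)
    return temps - background > contrast_threshold
```

A real pipeline would also have to account for emissivity variation with incidence angle, sky reflection, and sea-surface clutter, all of which the paper discusses.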

  4. Quantification of shoreline change along Hatteras Island, North Carolina: Oregon Inlet to Cape Hatteras, 1978-2002, and associated vector shoreline data

    USGS Publications Warehouse

    Hapke, Cheryl J.; Henderson, Rachel E.

    2015-01-01

Shoreline change spanning twenty-four years was assessed along the coastline of Cape Hatteras National Seashore, at Hatteras Island, North Carolina. The shorelines used in the analysis were generated from georeferenced historical aerial imagery and are used to develop shoreline change rates for Hatteras Island, from Oregon Inlet to Cape Hatteras. A total of 14 dates of aerial photographs ranging from 1978 through 2002 were obtained from the U.S. Army Corps of Engineers Field Research Facility in Duck, North Carolina, and scanned to generate digital imagery. The digital imagery was georeferenced, and high-water-line shorelines (interpreted from the wet/dry line) were digitized from each date to produce a time series of shorelines for the study area. Rates of shoreline change were calculated for three periods: the full span of the time series, 1978 through 2002, and two approximately decadal subsets, 1978–89 and 1989–2002.
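Rates like those described are commonly computed per transect as end-point rates: net shoreline movement between two dated shorelines divided by the elapsed time. A minimal sketch with hypothetical transect positions (the numbers are illustrative, not from the report):

```python
def end_point_rate(pos_start, pos_end, year_start, year_end):
    """Shoreline change rate (m/yr) at one transect: net movement
    between two dated shorelines divided by the elapsed years.
    Sign convention is arbitrary (here, positive = seaward)."""
    return (pos_end - pos_start) / (year_end - year_start)

# Hypothetical positions of one transect's shoreline (m from a baseline):
positions = {1978: 250.0, 1989: 228.0, 2002: 195.0}
rates = {
    "1978-2002": end_point_rate(positions[1978], positions[2002], 1978, 2002),
    "1978-89":   end_point_rate(positions[1978], positions[1989], 1978, 1989),
    "1989-2002": end_point_rate(positions[1989], positions[2002], 1989, 2002),
}
```

Computing the rate over the full span and over each decadal subset, as in the report, reveals whether erosion accelerated or slowed between the two sub-periods.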

  5. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach from space-based observatories, ground-based professional facilities, and the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to interpret FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and the application of photo/image-processing techniques. Some additional effort is needed to close the loop and make this imagery conveniently available for purposes beyond web and print publication. The metadata paradigms in digital photography now comply with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be used in more sophisticated, imaginative ways, exemplified by Sky in Google Earth and WorldWide Telescope.

  6. Quickbird Geometry Report for Summer 2003

    NASA Technical Reports Server (NTRS)

    Darbha, Ravikanth; Helder, Dennis; Choi, Taeyoung

    2005-01-01

Digital Globe provides for general use 2.4 m multi-spectral and 0.7 m panchromatic imagery acquired by the Quickbird satellite. This geometrically corrected imagery was obtained as standard and orthorectified products; the difference between the two products lies primarily in the degree of geometric accuracy that Digital Globe claims. For both products, every image pixel contains estimated sets of Northing/Easting and lat/long coordinates accessible through an image display application such as ENVI. Ground processing was performed by Digital Globe using version ADP 2.1 of their system. Analysis conducted at South Dakota State University attempted to verify the geometric accuracy of standard and orthorectified Quickbird imagery to determine whether specifications for the NASA Science Data Purchase (SDP) were met. These specifications are listed in Table 1 of Appendix 1. In this analysis, we had approximately 90 ground control points (the number varies with scene size on each date), uniformly distributed over the Brookings, SD, area, from 4 Quickbird scenes acquired August 23, September 15, and October 21 of 2003.

  7. Monitoring the vernal advancement and retrogradation (green wave effect) of natural vegetation

    NASA Technical Reports Server (NTRS)

    Rouse, J. W., Jr. (Principal Investigator)

    1973-01-01

The author has identified the following significant results. Preliminary evaluation of autumnal phase ground truth data suggests that the sampling procedures at the Great Plains Corridor network test sites are adequate to show relatively small temporal changes in above-ground vegetation biomass and vegetation condition. Vegetation changes measured August through December reflect grazing intensity and environmental conditions at the test sites. Preliminary analysis of black and white imagery suggests that detail in vegetation patterns is much greater than originally anticipated. A first-look analysis of single band imagery and digital data at two locations shows that woodland, grassland, and cropland areas are easily delineated. Computer-derived grey-scale maps from MSS digital data were shown to be useful in identifying the location of small fields and features of the natural and cultivated lands. Single band imagery and digital data are believed to have important application for synoptic land use mapping and inventory. Initial ratio analysis, using band 5 and 7 data, suggests its applicability for assessing the greenness of a vegetative scene.
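The band-5/band-7 ratio analysis mentioned can be illustrated with a simple band ratio and its normalized-difference form (the index that later became known as NDVI); the reflectance values below are hypothetical:

```python
def ratio_index(nir, red):
    """Simple ratio of MSS band 7 (near-infrared) to band 5 (red)."""
    return nir / red

def normalized_difference(nir, red):
    """Normalized difference of the same bands; it ranges over (-1, 1)
    and increases with the greenness of the scene."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances: dense grassland vs. bare soil
grass = normalized_difference(nir=0.45, red=0.08)  # high greenness
soil = normalized_difference(nir=0.25, red=0.20)   # low greenness
```

The normalized form is less sensitive than the raw ratio to overall illumination differences between scenes, which is one reason it became the standard greenness measure.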

  8. Land cover/use classification of Cairns, Queensland, Australia: A remote sensing study involving the conjunctive use of the airborne imaging spectrometer, the large format camera and the thematic mapper simulator

    NASA Technical Reports Server (NTRS)

    Heric, Matthew; Cox, William; Gordon, Daniel K.

    1987-01-01

In an attempt to improve the land cover/use classification accuracy obtainable from remotely sensed multispectral imagery, Airborne Imaging Spectrometer-1 (AIS-1) images were analyzed in conjunction with Thematic Mapper Simulator (NS001) imagery, Large Format Camera color infrared photography, and black-and-white aerial photography. Specific portions of the combined data set were registered and used for classification. Following this procedure, the resulting derived data were tested using an overall accuracy assessment method. Precise photogrammetric 2D-3D-2D geometric modeling techniques are not the basis for this study. Instead, the discussion presents the spectral findings resulting from the image-to-image registrations. Problems associated with the AIS-1/TMS integration are considered, and useful applications of the imagery combination are presented. More advanced methodologies for imagery integration are needed if multisystem data sets are to be utilized fully. Nevertheless, the research described herein provides a formulation for future Earth Observation Station related multisensor studies.

  9. Use of UAVs for Remote Measurement of Vegetation Canopy Variables

    NASA Astrophysics Data System (ADS)

    Rango, A.; Laliberte, A.; Herrick, J.; Steele, C.; Bestelmeyer, B.; Chopping, M. J.

    2006-12-01

    Remote sensing with different sensors has proven useful for measuring vegetation canopy variables at scales ranging from landscapes down to individual plants. For use at landscape scales, such as desert grasslands invaded by shrubs, it is possible to use multi-angle imagery from satellite sensors, such as MISR and CHRIS/Proba, with geometric optical models to retrieve fractional woody plant cover. Vegetation community states can be mapped using visible and near infrared ASTER imagery at 15 m resolution. At finer scales, QuickBird satellite imagery with approximately 60 cm resolution and piloted aircraft photography with 25-80 cm resolution can be used to measure shrubs above a critical size. Tests conducted with the QuickBird data in the Jornada basin of southern New Mexico have shown that 87% of all shrubs greater than 2 m2 were detected whereas only about 29% of all shrubs less than 2 m2 were detected, even at these high resolutions. Because there is an observational gap between satellite/aircraft measurements and ground observations, we have experimented with Unmanned Aerial Vehicles (UAVs) producing digital photography with approximately 5 cm resolution. We were able to detect all shrubs greater than 2 m2, and we were able to map small subshrubs indicative of rangeland deterioration, as well as remnant grass patches, for the first time. None of these could be identified on the 60 cm resolution data. Additionally, we were able to measure canopy gaps, shrub patterns, percent bare soil, and vegetation cover over mixed rangeland vegetation. This approach is directly applicable to rangeland health monitoring, and it provides a quantitative way to assess shrub invasion over time and to detect the depletion or recovery of grass patches. Further, if the UAV images have sufficient overlap, it may be possible to exploit the stereo viewing capabilities to develop a digital elevation model from the orthophotos, with a potential for extracting canopy height. 
We envision two parallel routes for investigation: one which emphasizes utilization of the most technically advanced passive and active space and aircraft sensors (e.g., LIDAR, radar, Hyperion, ASTER, QuickBird follow-on) for modeling research, and a second which emphasizes minimization of costs and maximization of simplicity for monitoring purposes utilizing inexpensive sensors such as digital cameras on UAVs for arid and semiarid rangelands. The use of UAVs will provide management agencies a way to assess various vegetation canopy variables for a very reasonable cost.

  10. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
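The digital reformatting step, reassembling one correct image from the four port read-outs, can be sketched as below. The port-to-strip mapping is an assumption made for illustration (the abstract does not specify it): each port is taken to read one 64-column-wide vertical strip of the 256 x 256 quadrant, row by row.

```python
import numpy as np

def reformat(port_streams):
    """Reassemble a 256x256 image from four port read-out streams.

    Assumption (illustrative): each of the 4 ports reads one
    64-column-wide vertical strip of the quadrant, row by row, so each
    stream holds 256*64 samples in row-major order within its strip.
    """
    strips = [np.asarray(s).reshape(256, 64) for s in port_streams]
    return np.hstack(strips)  # place the strips side by side
```

In the actual camera this step runs in dedicated digital hardware at up to 200 frames per second; the array operation above only shows the index bookkeeping involved.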

  11. An assessment of the utility of a non-metric digital camera for measuring standing trees

    Treesearch

    Neil Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matthew F. Winn

    2000-01-01

    Images acquired with a commercially available digital camera were used to make measurements on 20 red oak (Quercus spp.) stems. The ranges of diameter at breast height (DBH) and height to a 10 cm upper-stem diameter were 16-66 cm and 12-20 m, respectively. Camera stations located 3, 6, 9, 12, and 15 m from the stem were studied to determine the best distance to be...

  12. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. The level-processed values were then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
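The gamma-correction and color-transformation steps can be sketched as follows. This is an illustration of the general technique, not the authors' code; the 3-by-3 case is shown, fitted by least-squares regression as described.

```python
import numpy as np

def gamma_correct(values, gamma):
    """Linearize normalized (0-1) digital values with a power law,
    assuming the device encoded the signal with exponent 1/gamma."""
    return np.clip(values, 0.0, 1.0) ** gamma

def fit_color_matrix(camera_rgb, target_xyz):
    """Fit the 3x3 color transformation by regression between
    gamma-corrected camera values and measured tristimulus values.

    camera_rgb, target_xyz: N x 3 arrays over N test color samples.
    Returns M such that xyz = M @ rgb for a single sample."""
    M, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return M.T
```

A 3-by-4 variant would simply append a constant column of ones to `camera_rgb` so the regression can absorb an offset term.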

  13. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    PubMed

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

Methods are proposed for estimating the spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating the spectral responses of the camera. The spectral distribution of sky radiance is represented as a polynomial of wavelength, with coefficients obtained from digital RGB counts by a linear transformation. The spectral distribution of radiance estimated this way is consistent with that obtained by a spectrometer and by radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
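The radiance model described, a polynomial in wavelength whose coefficients come from the RGB counts by a linear transformation, can be sketched as follows. The calibration matrix here is a placeholder; in the paper it would follow from the estimated spectral responses of the camera.

```python
import numpy as np

def spectral_radiance(rgb, T, wavelengths):
    """Sky spectral radiance modeled as a polynomial of wavelength,
    with coefficients obtained from RGB counts by a linear transform.

    rgb: length-3 vector of digital counts.
    T: (degree+1) x 3 calibration matrix (assumed known).
    wavelengths: array of wavelengths (micrometers keep powers
    well scaled).
    Returns L(lambda) at each requested wavelength."""
    coeffs = T @ np.asarray(rgb)  # polynomial coefficients c_0..c_degree
    powers = np.vander(wavelengths, len(coeffs), increasing=True)
    return powers @ coeffs
```

With a degree-2 polynomial, T is 3 x 3 and the three RGB counts map one-to-one onto the three coefficients; higher degrees would need regularization or extra constraints.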

  14. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
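The real-time centroiding step can be sketched as a threshold-and-flood-fill pass over each frame, returning intensity-weighted spot centroids plus total spot intensities (the quantity correlated with PMT peak heights for multi-hit assignment). This is an illustrative reimplementation, not the authors' algorithm:

```python
import numpy as np

def centroid_spots(frame, threshold):
    """Locate ion spots on a camera frame: threshold, group bright
    pixels into 4-connected regions by flood fill, and return one
    (y_centroid, x_centroid, total_intensity) tuple per spot."""
    mask = frame > threshold
    labels = np.zeros(frame.shape, dtype=int)
    spots = []
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue  # pixel already belongs to an earlier spot
        stack, pix = [(y, x)], []
        labels[y, x] = len(spots) + 1
        while stack:  # flood fill one connected region
            cy, cx = stack.pop()
            pix.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < frame.shape[0] and 0 <= nx < frame.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = labels[y, x]
                    stack.append((ny, nx))
        w = np.array([frame[p] for p in pix], dtype=float)
        ys = np.array([p[0] for p in pix], dtype=float)
        xs = np.array([p[1] for p in pix], dtype=float)
        spots.append(((ys * w).sum() / w.sum(),
                      (xs * w).sum() / w.sum(),
                      w.sum()))
    return spots
```

At a 1 kHz repetition rate the production system does this per frame in optimized code; the Python version only shows the logic.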

  15. Field-based Digital Mapping of the November 3, 2002 Susitna Glacier Fault Rupture - Integrating remotely sensed data, GIS, and photo-linking technologies

    NASA Astrophysics Data System (ADS)

    Staft, L. A.; Craw, P. A.

    2003-12-01

In July 2003, the U.S. Geological Survey and the Alaska Division of Geological & Geophysical Surveys (DGGS) conducted field studies on the Susitna Glacier Fault (SGF), which ruptured in November 2002 during the M 7.9 Denali fault earthquake. The DGGS assumed responsibility for Geographic Information System (GIS) and data management, integrating remotely sensed imagery, GPS data, GIS, and photo-linking software to aid in planning and documentation of fieldwork. Pre-field preparation included acquisition of over 150 1:6,000-scale true-color aerial photographs taken shortly after the SGF rupture, 1:63,360-scale color-infrared (CIR) 1980 aerial photographs, and digital geographic information including a 15-minute Digital Elevation Model (DEM), 1:63,360-scale Digital Raster Graphics (DRG), and Landsat 7 satellite imagery. Using Orthomapper software, we orthorectified and mosaicked seven CIRs, creating a georeferenced, digital photo base of the study area. We used this base to reference the 1:6,000-scale aerial photography, to view locations of field sites downloaded from GPS, and to locate linked digital photographs that were taken in the field. Photos were linked using GPS-Photo Link software, which "links" digital photographs to GPS data by correlating time stamps from the GPS track log or waypoint file to those of the digital photos, using the correlated point data to create a photo-location ESRI shapefile. When this file is opened in ArcMap or ArcView with the GPS-Photo Link utility enabled, a thumbnail image of the linked photo appears when the cursor is over the photo location. Viewing photographed features and scarp-profile locations in GIS allowed us to evaluate data coverage of the rupture daily. Using remotely sensed imagery in the field with GIS gave us the versatility to display data on a variety of bases, including topographic maps, air photos, and satellite imagery, during fieldwork.
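The timestamp correlation that GPS-Photo Link performs can be sketched as nearest-in-time matching between photo timestamps and GPS fixes. This is a generic illustration, not the product's actual logic; the `max_gap` cutoff and all names are our assumptions.

```python
from datetime import datetime, timedelta

def link_photos(photos, track, max_gap=timedelta(seconds=30)):
    """Link each photo to the nearest-in-time GPS fix.

    photos: list of (filename, datetime) pairs from the camera.
    track: list of (datetime, lat, lon) fixes from the GPS track log.
    Returns (filename, lat, lon) records, skipping photos with no fix
    within max_gap (e.g. taken while the GPS was off)."""
    linked = []
    for name, t in photos:
        fix = min(track, key=lambda f: abs(f[0] - t))
        if abs(fix[0] - t) <= max_gap:
            linked.append((name, fix[1], fix[2]))
    return linked
```

The linked records correspond to the photo-location point file that the GIS then displays as clickable thumbnails.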
In the field, we downloaded, processed, and reviewed data as it was collected, taking major steps toward final digital map production. Using the described techniques greatly enhanced our ability to analyze and interpret field data; the resulting digital data structure allows us to efficiently gather, disseminate, and archive critical field data.

  16. Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications

    DTIC Science & Technology

    2013-06-01

Standards (http://www.iso.org); JIS Japanese Industrial Standard; JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org); LED... QR codes were developed by the Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry (see Appendix A for full details). Reed-Solomon error... eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thus, digital cinema cameras are more suitable

  17. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  18. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  19. The National Map - Orthoimagery

    USGS Publications Warehouse

    Mauck, James; Brown, Kim; Carswell, William J.

    2009-01-01

Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to:
* Measure distance
* Calculate areas
* Determine shapes of features
* Calculate directions
* Determine accurate coordinates
* Determine land cover and use
* Perform change detection
* Update maps
The standard digital orthoimage is a 1-m or finer resolution, natural color or color-infrared product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.

  20. Digital Camera Project Fosters Communication Skills

    ERIC Educational Resources Information Center

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  1. Influence of coolant tube curvature on film cooling effectiveness as detected by infrared imagery

    NASA Technical Reports Server (NTRS)

    Papell, S. S.; Graham, R. W.; Cageao, R. P.

    1979-01-01

    Thermal film cooling footprints observed by infrared imagery from straight, curved, and looped coolant tube geometries are compared. It was hypothesized that the differences in secondary flow and in the turbulence structure of flow through these three tubes should influence the mixing properties between the coolant and the main stream. A flow visualization tunnel, an infrared camera and detector, and a Hilsch tube were employed to test the hypothesis.

  2. Airborne and Ground-Based Platforms for Data Collection in Small Vineyards: Examples from the UK and Switzerland

    NASA Astrophysics Data System (ADS)

    Green, David R.; Gómez, Cristina; Fahrentrapp, Johannes

    2015-04-01

    This paper presents an overview of some of the low-cost ground and airborne platforms and technologies now becoming available for data collection in small area vineyards. Low-cost UAV or UAS platforms and cameras are now widely available as the means to collect both vertical and oblique aerial still photography and airborne videography in vineyards. Examples of small aerial platforms include the AR Parrot Drone, the DJI Phantom (1 and 2), and 3D Robotics IRIS+. Both fixed-wing and rotary wings platforms offer numerous advantages for aerial image acquisition including the freedom to obtain high resolution imagery at any time required. Imagery captured can be stored on mobile devices such as an Apple iPad and shared, written directly to a memory stick or card, or saved to the Cloud. The imagery can either be visually interpreted or subjected to semi-automated analysis using digital image processing (DIP) software to extract information about vine status or the vineyard environment. At the ground-level, a radio-controlled 'rugged' model 4x4 vehicle can also be used as a mobile platform to carry a number of sensors (e.g. a Go-Pro camera) around a vineyard, thereby facilitating quick and easy field data collection from both within the vine canopy and rows. For the small vineyard owner/manager with limited financial resources, this technology has a number of distinct advantages to aid in vineyard management practices: it is relatively cheap to purchase; requires a short learning-curve to use and to master; can make use of autonomous ground control units for repetitive coverage enabling reliable monitoring; and information can easily be analysed and integrated within a GIS with minimal expertise. In addition, these platforms make widespread use of familiar and everyday, off-the-shelf technologies such as WiFi, Go-Pro cameras, Cloud computing, and smartphones or tablets as the control interface, all with a large and well established end-user support base. 
Whilst there are still some limitations which constrain their use, including battery power and flight time, data connectivity, and payload capacity, such platforms nevertheless offer quick, low-cost, easy, and repeatable ways to capture valuable contextual data for small vineyards, complementing other sources of data used in Precision Viticulture (PV) and vineyard management. As these technologies continue to evolve very quickly, and more lightweight sensors become available for the smaller ground and airborne platforms, this will offer even more possibilities for a wider range of information to be acquired to aid in the monitoring, mapping, and management of small vineyards. The paper is illustrated with some examples from the UK and Switzerland.

  3. Application of Near-Surface Remote Sensing and computer algorithms in evaluating impacts of agroecosystem management on Zea mays (corn) phenological development in the Platte River - High Plains Aquifer Long Term Agroecosystem Research Network field sites.

    NASA Astrophysics Data System (ADS)

    Okalebo, J. A.; Das Choudhury, S.; Awada, T.; Suyker, A.; LeBauer, D.; Newcomb, M.; Ward, R.

    2017-12-01

The Long-term Agroecosystem Research (LTAR) network is a USDA-ARS effort that focuses on conducting research that addresses current and emerging issues in agriculture related to the sustainability and profitability of agroecosystems in the face of climate change and population growth. There are 18 sites across the USA covering key agricultural production regions. In Nebraska, a partnership between the University of Nebraska - Lincoln and ARD/USDA resulted in the establishment of the Platte River - High Plains Aquifer LTAR site in 2014. The site conducts research to sustain multiple ecosystem services, focusing specifically on Nebraska's main agronomic production agroecosystems, which comprise abundant corn, soybeans, managed grasslands, and beef production. As part of the national LTAR network, PR-HPA participates and contributes near-surface remotely sensed imagery of corn, soybean, and grassland canopy phenology to the PhenoCam Network through high-resolution digital cameras. This poster highlights the application, advantages, and usefulness of near-surface remotely sensed imagery in agroecosystem studies and management. It demonstrates how both infrared and red-green-blue imagery may be applied to monitor phenological events as well as crop abiotic stresses. Computer-based algorithms and analytic techniques proved instrumental in revealing crop phenological changes such as green-up and tasseling in corn. This poster also reports the suitability and applicability of corn-derived computer-based algorithms for evaluating the phenological development of sorghum, since the two crops have similar phenology, with sorghum panicles being analogous to corn tassels. This latter assessment was carried out using a sorghum dataset obtained from the Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform project, Maricopa Agricultural Center, Arizona.
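Greenness tracking of the kind used for PhenoCam imagery is commonly based on the green chromatic coordinate, Gcc = G / (R + G + B), computed from region-of-interest mean digital numbers. A minimal sketch (the specific index used at this site is not stated in the abstract):

```python
def green_chromatic_coordinate(r, g, b):
    """Green chromatic coordinate Gcc = G / (R + G + B), computed from
    mean red, green, and blue digital numbers over a region of interest.
    Gcc rises at green-up and falls with senescence, and largely
    cancels illumination changes common to all three channels."""
    return g / (r + g + b)
```

Events such as tasseling appear as inflections in the seasonal Gcc trajectory rather than in any single image.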

  4. Exploring the Use of Interactive Digital Storytelling Video: Promoting Student Engagement and Learning in a University Hybrid Course

    ERIC Educational Resources Information Center

    Shelton, Catharyn C.; Warren, Annie E.; Archambault, Leanna M.

    2016-01-01

    This study explores interactive digital storytelling in a university hybrid course. Digital stories leverage imagery and narrative-based content to explore concepts, while appealing to millennials. When digital storytelling is used as the main source of course content, tensions arise regarding how to engage and support student learning while…

  5. 50 CFR 216.155 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...

  6. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  7. Coming of Age: Polarization as a Probe of Plant Canopy Water Status

    NASA Astrophysics Data System (ADS)

    Vanderbilt, V. C.; Daughtry, C. S. T.; Kupinski, M.; Bradley, C. L.; Dahlgren, R. P.

    2015-12-01

We tested the hypothesis that the relative water content (RWC) of the sunlit leaves in a plant canopy may be estimated from polarized canopy imagery. Recently (IGARSS, July 27-31, 2015, Milan, Italy), we reported the results of laboratory polarization measurements of single detached leaves during dry down. We found that RWC was linearly related to the ratio of the reflectance of the interior of the leaf and the leaf transmittance. Here we report application of the laboratory results to estimate RWC for sunlit leaves in a plant canopy. Using a commercial-off-the-shelf (COTS) Nikon D810 camera with Nikkor 300 mm lens and Polaroid type HN-22 linear polarizer, we photographed in the principal plane a plant canopy displaying a gradient of water stress and collected, at each of multiple points along the gradient, two images, one with the polarization filter oriented for maximum scene response and a second with the filter oriented for minimum scene response. We converted the digital values in the two images to reflectance factor with reference to images of a white, flat, horizontal Spectralon surface. We classified the polarization imagery, identifying reflecting leaves, transmitting leaves, other sunlit vegetation and shadows. For each image pair we normalized the leaf internal reflectance by dividing by the cosine of the angle of incidence of the sunlight on the leaf, selected the leaf maximum transmittance in the scene and divided to obtain the ratio reflectance/transmittance, which we compared with leaf RWC. We determined the leaf relative water content by harvesting a section of leaf and immediately placing it in a sealed container in an ice chest. Later in the laboratory the leaf sample was weighed, rehydrated, weighed, dried and again weighed.
RWC was determined using the standard formula. Our experimental results support our hypothesis, suggesting that the RWC of sunlit leaves in a plant canopy may be estimated from analysis of polarization imagery collected by a COTS camera system. Unlike remotely sensed estimates of canopy equivalent water thickness, our estimates of the RWC of sunlit canopy leaves provide leaf physiological information. We propose that RWC estimates based upon sunlit leaves are more relevant to assessing the water status of a plant canopy than RWC estimates based upon large-FOV canopy measurements.
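The abstract does not spell out the "standard formula"; as a reference point, the conventional relative water content calculation from the three weighings described (fresh, turgid, dry) can be sketched as follows. The function name and the example masses are illustrative assumptions:

```python
def relative_water_content(fresh_g, turgid_g, dry_g):
    """Conventional RWC formula from the three weighings described above:
    RWC (%) = 100 * (fresh - dry) / (turgid - dry).

    fresh_g:  mass of the leaf sample as harvested
    turgid_g: mass after rehydration to full turgor
    dry_g:    mass after oven drying
    """
    return 100.0 * (fresh_g - dry_g) / (turgid_g - dry_g)

# Hypothetical masses for illustration: 0.80 g fresh, 1.00 g turgid,
# 0.20 g dry gives RWC = 100 * 0.60 / 0.80 = 75%.
print(relative_water_content(0.80, 1.00, 0.20))  # → 75.0
```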

  8. Coming of Age: Polarization as a Probe of Plant Canopy Water Status

    NASA Technical Reports Server (NTRS)

    Vanderbilt, Vern C.; Daughtry, Craig S. T.; Kupinski, Meredith; Bradley, Christine Lavella; Dahlgren, Robert P.

    2015-01-01

We tested the hypothesis that the relative water content (RWC) of the sunlit leaves in a plant canopy may be estimated from polarized canopy imagery. Recently (IGARSS, July 27-31, 2015, Milan, Italy), we reported the results of laboratory polarization measurements of single detached leaves during dry down. We found that RWC was linearly related to the ratio of the reflectance of the interior of the leaf and the leaf transmittance. Here we report application of the laboratory results to estimate RWC for sunlit leaves in a plant canopy. Using a commercial-off-the-shelf (COTS) Nikon D810 camera with Nikkor 300 mm lens and Polaroid type HN-22 linear polarizer, we photographed in the principal plane a plant canopy displaying a gradient of water stress and collected, at each of multiple points along the gradient, two images, one with the polarization filter oriented for maximum scene response and a second with the filter oriented for minimum scene response. We converted the digital values in the two images to reflectance factor with reference to images of a white, flat, horizontal Spectralon surface. We classified the polarization imagery, identifying reflecting leaves, transmitting leaves, other sunlit vegetation and shadows. For each image pair we normalized the leaf internal reflectance by dividing by the cosine of the angle of incidence of the sunlight on the leaf, selected the leaf maximum transmittance in the scene and divided to obtain the ratio reflectance/transmittance, which we compared with leaf RWC. We determined the leaf relative water content by harvesting a section of leaf and immediately placing it in a sealed container in an ice chest. Later in the laboratory the leaf sample was weighed, rehydrated, weighed, dried and again weighed. RWC was determined using the standard formula. 
Our experimental results support our hypothesis, suggesting that the RWC of sunlit leaves in a plant canopy may be estimated from analysis of polarization imagery collected by a COTS camera system. Unlike remotely sensed estimates of canopy equivalent water thickness, our estimates of the RWC of sunlit canopy leaves provide leaf physiological information. We propose that RWC estimates based upon sunlit leaves are more relevant to assessing the water status of a plant canopy than RWC estimates based upon large-FOV canopy measurements.

  9. Conception of a cheap infrared camera using a Fresnel lens

    NASA Astrophysics Data System (ADS)

    Grulois, Tatiana; Druart, Guillaume; Guérineau, Nicolas; Crastes, Arnaud; Sauer, Hervé; Chavel, Pierre

    2014-09-01

Today, huge efforts are being made in research and industry to design compact and cheap uncooled infrared optical systems for low-cost imagery applications. Indeed, infrared cameras are currently too expensive to be widespread; if their cost can be cut, we expect to open new types of markets. In this paper, we present the cheap broadband microimager we have designed. It operates in the long-wavelength infrared range and uses only one silicon lens, at minimal cost for the manufacturing process. Our concept is based on the use of thin optics; therefore, inexpensive unconventional materials can be used because some absorption can be tolerated. Our imager uses a thin Fresnel lens. Up to now, Fresnel lenses have not been used for broadband imagery applications because of their disastrous chromatic properties. However, we show that working in a high diffraction order can significantly reduce chromatism. A prototype has been made and the performance of our camera will be discussed. Its characterization has been carried out in terms of modulation transfer function (MTF) and noise equivalent temperature difference (NETD). Finally, experimental images will be presented.

  10. Evaluation of a novel laparoscopic camera for characterization of renal ischemia in a porcine model using digital light processing (DLP) hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.

    2012-03-01

    Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.

  11. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include X-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.

  12. Compiling and editing agricultural strata boundaries with remotely sensed imagery and map attribute data using graphics workstations

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1991-01-01

    The USDA presently uses labor-intensive photographic interpretation procedures to delineate large geographical areas into manageable size sampling units for the estimation of domestic crop and livestock production. Computer software to automate the boundary delineation procedure, called the computer-assisted stratification and sampling (CASS) system, was developed using a Hewlett Packard color-graphics workstation. The CASS procedures display Thematic Mapper (TM) satellite digital imagery on a graphics display workstation as the backdrop for the onscreen delineation of sampling units. USGS Digital Line Graph (DLG) data for roads and waterways are displayed over the TM imagery to aid in identifying potential sample unit boundaries. Initial analysis conducted with three Missouri counties indicated that CASS was six times faster than the manual techniques in delineating sampling units.

  13. Picture archiving and computing systems: the key to enterprise digital imaging.

    PubMed

    Krohn, Richard

    2002-09-01

    The utopian view of the electronic medical record includes the digital transformation of all aspects of patient information. Historically, imagery from the radiology, cardiology, ophthalmology, and pathology departments, as well as the emergency room, has been a morass of paper, film, and other media, isolated within each department's system architecture. In answer to this dilemma, picture archiving and computing systems have become the focal point of efforts to create a single platform for the collection, storage, and distribution of clinical imagery throughout the health care enterprise.

  14. Age discrimination among eruptives of Menengai Caldera, Kenya, using vegetation parameters from satellite imagery

    NASA Technical Reports Server (NTRS)

    Blodget, Herbert W.; Heirtzler, James R.

    1993-01-01

    Results are presented of an investigation to determine the degree to which digitally processed Landsat TM imagery can be used to discriminate among vegetated lava flows of different ages in the Menengai Caldera, Kenya. A selective series of five images, consisting of a color-coded Landsat 5 classification and four color composites, are compared with geologic maps. The most recent of more than 70 postcaldera flows within the caldera are trachytes, which are variably covered by shrubs and subsidiary grasses. Soil development evolves as a function of time, and as such supports a changing plant community. Progressively older flows exhibit the increasing dominance of grasses over bushes. The Landsat images correlated well with geologic maps, but the two mapped age classes could be further subdivided on the basis of different vegetation communities. It is concluded that field maps can be modified, and in some cases corrected by use of such imagery, and that digitally enhanced Landsat imagery can be a useful aid to field mapping in similar terrains.

  15. Detection and identification of benthic communities and shoreline features in Biscayne Bay

    NASA Technical Reports Server (NTRS)

    Kolipinski, M. C.; Higer, A. L.

    1970-01-01

Progress made in the development of a technique for identifying and delineating benthic and shoreline communities using multispectral imagery is described. Images were collected with a multispectral scanner system mounted in a C-47 aircraft. Concurrent with the overflight, ecological ground- and sea-truth information was collected at 19 sites in the bay and on the shore. Preliminary processing of the scanner imagery with a CDC 1604 digital computer provided the optimum channels for discernment among different underwater and coastal objects. Automatic mapping of the benthic plants by multiband imagery, coupled with the mapping of isotherms and hydrodynamic parameters by digital model, can become an effective predictive ecological tool. Using the two systems, it appears possible to predict conditions that could adversely affect the benthic communities. With the advent of the ERTS satellites and space platforms, imagery data could be obtained which, when used in conjunction with water-level and meteorological data, would provide for continuous ecological monitoring.

  16. Mapping Land and Water Surface Topography with instantaneous Structure from Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J.; Fonstad, M. A.

    2012-12-01

Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; however, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include anything where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam-removal monitoring. The camera system that was used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. 
The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
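The "traditional digital surface model differencing" the authors mention can be sketched minimally as a DEM-of-difference with an uncertainty-based detection threshold. The function name, the grid inputs, and the per-survey error values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, sigma_new, sigma_old, t=1.96):
    """Subtract two gridded surface models and mask cells whose change
    falls within the propagated elevation uncertainty: keep dz only where
    |dz| >= t * sqrt(sigma_new**2 + sigma_old**2)."""
    dz = dem_new - dem_old
    lod = t * np.hypot(sigma_new, sigma_old)  # limit of detection
    return np.where(np.abs(dz) >= lod, dz, np.nan)

# Illustrative 2x2 grids: a 1.0 m rise survives the threshold, while a
# 0.01 m wiggle is masked when each survey has ~0.05 m vertical uncertainty.
change = dem_of_difference(np.array([[1.0, 0.01], [0.0, 0.0]]),
                           np.zeros((2, 2)), 0.05, 0.05)
```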

  17. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
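The abstract does not give the details of the block-adaptive scheme; as a toy illustration of the underlying idea (predictive coding that respects the color filter array), a fixed same-color horizontal predictor over Bayer-pattern rows can be written as below. The function names and the fixed two-column predictor are assumptions for illustration; the paper's method adapts its coding per block:

```python
import numpy as np

def cfa_encode(raw):
    """Predict each pixel from its same-color neighbor two columns to the
    left (Bayer rows alternate colors), so residuals cluster near zero and
    entropy-code far more compactly than the raw 10-14 bit samples."""
    residual = raw.astype(np.int32).copy()
    residual[:, 2:] -= raw[:, :-2].astype(np.int32)
    return residual

def cfa_decode(residual):
    """Invert the prediction column by column -- the scheme is lossless."""
    rec = residual.copy()
    for c in range(2, rec.shape[1]):
        rec[:, c] += rec[:, c - 2]
    return rec

# Round trip on synthetic 10-bit CFA data: decoding restores the input bit-exactly.
rng = np.random.default_rng(0)
raw = rng.integers(0, 1024, size=(8, 8), dtype=np.int32)
assert np.array_equal(cfa_decode(cfa_encode(raw)), raw)
```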

  18. Cost-effective handling of digital medical images in the telemedicine environment.

    PubMed

    Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel

    2007-09-01

This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization on less costly monitors are discussed. For the digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with an imaging-noise-to-compression-noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high-quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.
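The paper's criterion, an imaging-noise-to-compression-noise ratio (ICR) greater than four, can be sketched as follows. Estimating the inherent noise from a visually flat image patch is a common convention and an assumption here, not necessarily the authors' estimator:

```python
import numpy as np

def icr(original, decompressed, flat_patch):
    """ICR sketch: standard deviation of the inherent imaging noise
    (estimated in a visually flat patch of the original) divided by the
    standard deviation of the compression error."""
    imaging_noise = np.std(original[flat_patch].astype(float))
    compression_noise = np.std(original.astype(float) - decompressed.astype(float))
    return imaging_noise / compression_noise

# Synthetic check: ~8 gray levels of imaging noise against ~1 level of
# compression error gives an ICR comfortably above the threshold of four.
rng = np.random.default_rng(1)
original = 100.0 + rng.normal(0.0, 8.0, (64, 64))
decompressed = original + rng.normal(0.0, 1.0, (64, 64))
ratio = icr(original, decompressed, np.s_[:16, :16])
```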

  19. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1996

    1996-01-01

    Includes abstracts of special interest group (SIG) sessions. Highlights include digital imagery; text summarization; browsing; digital libraries; icons and the Web; information management; curricula planning; interfaces; information systems; theories; scholarly and scientific communication; global development; archives; document delivery;…

  20. Crew Field Notes: A New Tool for Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Evans, Cynthia; Eppler, Dean; Gernhardt, Michael; Bluethmann, William; Graf, Jodi; Bleisath, Scott

    2011-01-01

The Desert Research and Technology Studies (DRATS) field tests of 2010 focused on the simultaneous operation of two rovers, a historical first. The complexity and data volume of two rovers operating simultaneously presented significant operational challenges for the on-site Mission Control Center, including the real-time science support function. The latter was split into two "tactical" back rooms, one for each rover, that supported the real-time traverse activities; in addition, a "strategic" science team convened overnight to synthesize the day's findings and to conduct the strategic forward planning of the next day or days, as detailed in [1, 2]. Current DRATS simulations and operations differ dramatically from those of Apollo, including the most evolved Apollo 15-17 missions, due to the advent of digital technologies. Modern digital still and video cameras, combined with the capability for real-time transmission of large volumes of data, including multiple video streams, offer the prospect for the ground-based science support room(s) in Mission Control to witness all crew activities in unprecedented detail and in real time. It was not uncommon during DRATS 2010 for each tactical science back room to simultaneously receive some 4-6 video streams from cameras mounted on the rover or the crews' backpacks. Some of the rover cameras are controllable PZT (pan, zoom, tilt) devices that can be operated by the crews (during extensive drives) or remotely by the back room (during EVAs). Typically, a dedicated "expert" and professional geologist in the tactical back room(s) controls, monitors and analyzes a single video stream and provides the findings to the team, commonly supported by screen-saved images. It seems obvious that the real-time comprehension and synthesis of the verbal descriptions, extensive imagery, and other information (e.g., navigation data, timelines, etc.) flowing into the science support room(s) constitute a fundamental challenge to future mission operations: how can one analyze, comprehend and synthesize, in real time, the enormous data volume coming to the ground? Real-time understanding of all data is needed for constructive interaction with the surface crews, and it becomes critical for the strategic forward planning process.

  1. An evaluation of new high resolution image collection and processing techniques for estimating shrub cover and detecting landscape changes associated with military training in arid lands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, D.J.; Ostler, W.K.

    2000-02-01

Research funded by the US Department of Defense, US Department of Energy, and the US Environmental Protection Agency as part of Project CS-1131 of the Strategic Environmental Research and Development Program evaluated novel techniques for collecting high-resolution images in the Mojave Desert using helicopters, helium-filled blimps, kites, and hand-held telescoping poles at heights from 1 to 150 meters. Several camera types, lenses, films, and digital techniques were evaluated on the basis of their ability to correctly estimate canopy cover of shrubs. A high degree of accuracy was obtained with photo scales of 1:4,000 or larger and flatbed scanning rates from films or prints of 300 lines per inch or larger. Smaller-scale images were of value in detecting retrospective changes in cover of large shrubs, but failed to detect smaller shrubs. Excellent results were obtained using inexpensive 35-millimeter cameras and new super-fine-grain film such as Kodak's Royal Gold (trademark) (ASA 100) film or megapixel digital cameras. New image-processing software, such as SigmaScan Pro (trademark), makes it possible to accurately measure areas up to 1 hectare in size for total cover and density in 10 minutes, compared to several hours or days of field work. In photographs with scales of 1:1,000 and 1:2,000, it was possible to detect cover and density of up to four dominant shrub species. Canopy cover and other parameters such as width, length, diameter, and shape factors can be nearly instantaneously measured for each individual shrub, yielding size-distribution histograms and other statistical data on plant community structure. Use of the technique is being evaluated in a four-year study of military training impacts at Fort Irwin, California, and results compared with image processing using conventional aerial photography and satellite imagery, including the new 1-meter-pixel IKONOS images. The technique is a valuable new emerging tool to accurately assess vegetation structure and landscape changes due to military or other land-use disturbances.

  2. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

Camera resolution has been drastically improved in response to current demand for high-quality digital images; for example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  3. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

Digital cameras are of increasing significance for professional applications in photo studios, where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte), and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  4. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  5. Bringing the Digital Camera to the Physics Lab

    ERIC Educational Resources Information Center

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-01-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…

  6. Development of a digital camera tree evaluation system

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Philip A. Araman

    2000-01-01

    Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...

  7. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometries of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also applied to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs with accurate, robust, and rapid image processing efficiency.
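The modified projective transformation at the core of methods like RABBIT builds on the standard planar homography. A minimal sketch of mapping one band's pixel coordinates into a reference band's geometry is given below; the 3×3 matrix is supplied directly here, whereas a full band-registration pipeline would estimate and refine it from matched features between bands (the function name is an assumption for illustration):

```python
import numpy as np

def apply_homography(points, H):
    """Map an Nx2 array of pixel coordinates through a 3x3 projective
    transform. In band-to-band registration, H would carry one lens's
    image geometry into the reference band's geometry."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # de-homogenize

# A pure-translation homography shifts every coordinate by (dx, dy) = (3, -2).
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
shifted = apply_homography(np.array([[10.0, 20.0]]), H)  # → [[13., 18.]]
```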

  8. Multiple-camera tracking: UK government requirements

    NASA Astrophysics Data System (ADS)

    Hosmer, Paul

    2007-10-01

The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006, and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  9. GIS Integration for Quantitatively Determining the Capabilities of Five Remote Sensors for Resource Exploration

    NASA Technical Reports Server (NTRS)

    Pascucci, R. F.; Smith, A.

    1982-01-01

    To assist the U.S. Geological Survey in carrying out a Congressional mandate to investigate the use of side-looking airborne radar (SLAR) for resources exploration, a research program was conducted to define the contribution of SLAR imagery to structural geologic mapping and to compare this with contributions from other remote sensing systems. Imagery from two SLAR systems and from three other remote sensing systems was interpreted, and the resulting information was digitized, quantified and intercompared using a computer-assisted geographic information system (GIS). The study area covers approximately 10,000 square miles within the Naval Petroleum Reserve, Alaska, and is situated between the foothills of the Brooks Range and the North Slope. The principal objectives were: (1) to establish quantitatively, the total information contribution of each of the five remote sensing systems to the mapping of structural geology; (2) to determine the amount of information detected in common when the sensors are used in combination; and (3) to determine the amount of unique, incremental information detected by each sensor when used in combination with others. The remote sensor imagery that was investigated included real-aperture and synthetic-aperture radar imagery, standard and digitally enhanced LANDSAT MSS imagery, and aerial photos.

  10. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  11. Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide

    DTIC Science & Technology

    2016-07-27

    a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in...installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely...assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and

  12. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently, there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, as well as system limitations, are explained in the paper.

  13. Comparison of 10 digital SLR cameras for orthodontic photography.

    PubMed

    Bister, D; Mordarai, F; Aveling, R M

    2006-09-01

    Digital photography is now widely used to document orthodontic patients. High quality intra-oral photography depends on a satisfactory 'depth of field' focus and good illumination. Automatic 'through the lens' (TTL) metering is ideal to achieve both the above aims. Ten current digital single lens reflex (SLR) cameras were tested for use in intra- and extra-oral photography as used in orthodontics. The manufacturers' recommended macro-lens and macro-flash were used with each camera. Handling characteristics, colour reproducibility, quality of the viewfinder and flash recharge time were investigated. No camera took acceptable images in the factory default setting or 'automatic' mode: this mode was absent on some cameras (Nikon, Fujifilm), led to overexposure (Olympus), or gave poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only Olympus cameras were able to take intra- and extra-oral photographs without the need to change settings, and were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax), or aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, all cameras produced high quality intra- and extra-oral images once appropriately adjusted. The resolution of the images is more than satisfactory for all cameras. There were significant differences relating to the quality of colour reproduction and the size and brightness of the viewfinders. The Nikon D100 and Fujifilm S3 Pro consistently scored best for colour fidelity. Pentax and Konica-Minolta had the largest and brightest viewfinders.

  14. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  15. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
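
    The multi-hit step pairs camera spots with TOF peaks through their correlated amplitudes: brighter spots correspond to taller peaks. The following is a minimal sketch of that idea, not the authors' real-time code; the crude nearest-pixel centroiding and the synthetic frame are assumptions:

```python
import numpy as np

def centroid_spots(frame, threshold):
    """Group bright pixels into spots and return (x, y, intensity) centroids --
    a crude stand-in for the real-time centroiding step."""
    ys, xs = np.nonzero(frame > threshold)
    spots, used = [], np.zeros(len(xs), dtype=bool)
    for i in range(len(xs)):
        if used[i]:
            continue
        # group pixels within a small radius into one spot
        close = (np.abs(xs - xs[i]) <= 2) & (np.abs(ys - ys[i]) <= 2) & ~used
        w = frame[ys[close], xs[close]].astype(float)
        spots.append((float(np.average(xs[close], weights=w)),
                      float(np.average(ys[close], weights=w)),
                      float(w.sum())))
        used |= close
    return spots

def match_spots_to_tof(spots, tof_peaks):
    """Pair camera spots with TOF peaks by rank-ordering intensity vs. height."""
    spots_sorted = sorted(spots, key=lambda s: -s[2])
    peaks_sorted = sorted(tof_peaks, key=lambda p: -p[1])
    return list(zip(spots_sorted, peaks_sorted))

# Two hits in one frame: a bright spot and a dim spot.
frame = np.zeros((32, 32))
frame[10, 10] = 100.0   # bright hit
frame[20, 25] = 40.0    # dim hit
spots = centroid_spots(frame, threshold=10.0)
tof_peaks = [(512.3, 35.0), (498.1, 90.0)]  # (arrival time, peak height)
pairs = match_spots_to_tof(spots, tof_peaks)
```

    Rank matching is the simplest possible correlation rule; a production system would also have to handle near-equal intensities and spots split across frames.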

  16. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that, by working with several local areas of the image, the rotational effects within each local area can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data accordingly can be repeated until the desired degree of registration is achieved. The method presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data itself.
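
    The per-local-area step amounts to finding the translation that best overlays two binary boundary maps. Below is a minimal sketch on a synthetic boundary map; the exhaustive overlap search is an assumed stand-in for the paper's matching procedure:

```python
import numpy as np

def local_shift(ref, mov, max_shift=5):
    """Translation (dy, dx) that best aligns binary boundary map `mov` onto
    `ref`, found by exhaustive overlap search over small shifts."""
    best, best_shift = -1, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = np.sum(ref & np.roll(np.roll(mov, dy, 0), dx, 1))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic boundary map: a rectangle outline, plus a misaligned copy
# shifted up 2 rows and right 3 columns.
ref = np.zeros((64, 64), dtype=bool)
ref[20:40, 15] = ref[20:40, 45] = True
ref[20, 15:46] = ref[39, 15:46] = True
mov = np.roll(np.roll(ref, -2, 0), 3, 1)

dy, dx = local_shift(ref, mov)   # shift needed to re-align mov onto ref
```

    Repeating this over several local areas and fitting a rotation plus translation to the set of local shifts gives the global misalignment described in the abstract.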

  17. A methodology to generate high-resolution digital elevation model (DEM) and surface water profile for a physical model using close range photogrammetric (CRP) technique

    NASA Astrophysics Data System (ADS)

    Mali, V. K.; Kuiry, S. N.

    2015-12-01

    Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to error. Remotely sensed satellite imagery can provide the necessary information for large areas, but high-resolution imagery is expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution, untimely imagery is inaccurate and impractical. Despite this, such data are often used to calibrate river flow models, even though these models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, the data could be supplemented through experimental observations of a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the CRP technique. A number of DSLR Nikon D5300 cameras, mounted 3.5 m above the river bed, were used to capture images of the physical model and of the flooding scenarios during the experiments. During each experiment, non-specular materials were introduced at the inlet and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile, and the generated data were passed through statistical analysis to identify errors. The accuracy of the WS profile was limited by the extent and density of the non-specular powder, by stereo-matching discrepancies, and by several camera factors, including orientation, illumination, and altitude. The CRP technique for a large-scale physical model can significantly reduce time and manual labour and avoids the human error involved in taking data with a point gauge. The resulting highly accurate DEM and WS profile can be used in mathematical models for accurate prediction of river dynamics. This study should be very helpful for sediment transport studies and can also be extended to real case studies.
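
    The statistical error analysis against the ultrasonic ground-control points can be illustrated by a simple RMSE check of a gridded DEM. This is a hypothetical sketch; the nearest-cell sampling and the toy planar DEM are assumptions, not the authors' procedure:

```python
import numpy as np

def dem_rmse(dem, cell, gcps):
    """RMSE of a gridded DEM against surveyed ground-control points.
    gcps: iterable of (x, y, z) in DEM coordinates; nearest-cell sampling."""
    errs = []
    for x, y, z in gcps:
        r, c = int(round(y / cell)), int(round(x / cell))
        errs.append(dem[r, c] - z)
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy DEM: a plane z = 0.01*x + 0.02*y on a 1 cm grid (values invented).
cell = 0.01
xs = np.arange(100) * cell
dem = 0.01 * xs[None, :] + 0.02 * xs[:, None]
gcps = [(0.30, 0.50, 0.0130), (0.70, 0.20, 0.0110)]  # (x, y, z) in metres
rmse = dem_rmse(dem, cell, gcps)
```

    In practice bilinear interpolation at the GCP locations and separate horizontal/vertical error budgets would give a fuller picture than this nearest-cell check.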

  19. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: this component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. It incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: (1) an edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image; (2) straight-line segments of edges are extracted from the linked lists generated in step 1, and any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects; (3) a gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
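
    The gradient-filter test in step 3 can be sketched as sampling the gradient along a candidate segment and thresholding its variance. This is a hedged illustration only; the sampling scheme, the variance threshold, and the synthetic gradient image are assumptions, not the system's actual algorithm:

```python
import numpy as np

def edge_is_artificial(gradient, segment, var_threshold=10.0):
    """Sample the intensity gradient along a line segment; low variance
    suggests the edge of an artificial object (building, sign, etc.)."""
    (r0, c0), (r1, c1) = segment
    n = max(abs(r1 - r0), abs(c1 - c0)) + 1
    rs = np.linspace(r0, r1, n).round().astype(int)
    cs = np.linspace(c0, c1, n).round().astype(int)
    samples = gradient[rs, cs]
    return float(np.var(samples)) < var_threshold

# Synthetic gradient image: a building edge with uniform gradient (row 10)
# and a natural edge with strongly varying gradient (row 30).
grad = np.zeros((50, 50))
grad[10, 5:45] = 80.0                                   # uniform
grad[30, 5:45] = 80.0 + 40.0 * np.sin(np.arange(40))    # varying

building_like = edge_is_artificial(grad, ((10, 5), (10, 44)))
natural_like = edge_is_artificial(grad, ((30, 5), (30, 44)))
```

    A real implementation would sample the gradient with sub-pixel interpolation and calibrate the threshold against labelled imagery rather than using a fixed constant.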

  20. Application of a Very-Low-Cost Unmanned Aerial Vehicle (UAV) and Consumer Grade Camera for the Collection of Research Grade Data: Preliminary Findings

    NASA Astrophysics Data System (ADS)

    Christian, P.; Davis, J. D.; Blesius, L.

    2013-12-01

    The use of UAV technology in the field of geoscience research has grown almost exponentially in the last decade. UAVs have been utilized as a sensor platform in many fields including geology, biology, climatology, geomorphology, and archaeology. A UAV's ability to fly frequently, at very low altitude, and at relatively little cost makes it a perfect compromise between free, low temporal and spatial resolution satellite data and terrestrial survey when there are insufficient funds to purchase custom satellite or manned aircraft data. Unfortunately, many UAVs available for research are still relatively expensive and often have predetermined imaging systems. However, the proliferation of hobbyist-grade UAVs and consumer point-and-shoot cameras may provide many research projects with an alternative that is both cost-effective and efficient in data collection. This study therefore seeks to answer the question: can these very low cost, hobby-grade UAVs be used to produce research-grade data? To achieve this end, in December of 2012 a small grant was obtained (<$6500) to set up a complete UAV system and to employ it in a diverse range of research. The system comprises a 3D Robotics hexacopter, Ardupilot automated flight hardware and software, spare parts and a tool kit, two Canon point-and-shoot cameras including one modified for near-infrared imagery, and a field laptop. To date, successful research flights have been flown for geomorphic research in degraded and restored montane meadows to study stream channel formation using both visible and near-infrared imagery, as well as for the creation of digital elevation models of large hillslope gullies using structure from motion (SFM). Other applications for the hexacopter, in progress or planned, include landslide monitoring, vegetation monitoring and mapping using the normalized difference vegetation index, archaeological survey, and bird nest identification on small rock islands.
An analysis of the results produced so far indicates that this low-cost approach can be used to gather relevant research data but there are significant downsides to using equipment designed for hobbyists and the public rather than that which has been designed primarily for research. Specifically, the repurposing and maintenance of the low-cost equipment greatly increases the time needed before quality data can be obtained.

  1. Toward the light field display: autostereoscopic rendering via a cluster of projectors.

    PubMed

    Yang, Ruigang; Huang, Xinyu; Li, Sifang; Jaynes, Christopher

    2008-01-01

    Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers at different viewpoints with a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display: calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.

  2. Uav Application in Coastal Environment, Example of the Oleron Island for Dunes and Dikes Survey

    NASA Astrophysics Data System (ADS)

    Guillot, B.; Pouget, F.

    2015-08-01

    Recent improvements in the ease of use of civil UAVs led the University of La Rochelle to develop a UAV program around its own potential coastal applications. An application program involving La Rochelle University and the District of Oleron Island began in January 2015 and lasted through July 2015. The aims were to choose 9 study areas and survey them during the winter season; the studies concerned surveying the dikes and coastal sand dunes of Oleron Island. During each flight, an action sport camera fixed on the UAV's brushless gimbal took a series of 150 pictures. After processing the photographs and using a 3D reconstruction plugin via PhotoScan, we were able to export high-resolution ortho-imagery, DSMs and 3D models. After applying GIS treatment to these images, volumetric evolutions between flights were revealed through a DDVM (Difference of Digital Volumetric Model), in order to study sand movements on coastal sand dunes.
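
    A DDVM reduces to differencing two co-registered DSMs and converting the per-cell height change into volume. A minimal sketch with a toy accretion patch (grid size and values hypothetical, not from the Oleron surveys):

```python
import numpy as np

def ddvm_volume(dsm_before, dsm_after, cell_size):
    """Net volume change between two survey DSMs: per-cell height
    difference times the cell area, summed over the grid."""
    dz = dsm_after - dsm_before
    return float(dz.sum() * cell_size ** 2)

# Toy dune: 0.5 m of accretion over a 10 m x 10 m patch of a
# 100 m x 100 m survey area, on a 1 m grid.
before = np.zeros((100, 100))
after = before.copy()
after[40:50, 40:50] += 0.5
volume = ddvm_volume(before, after, cell_size=1.0)  # cubic metres
```

    Splitting dz into its positive and negative parts would separate accretion from erosion, which is usually more informative for dune monitoring than the net figure alone.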

  3. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  4. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations, and industrial measurements. Recently, digital cameras known as mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that light entering through the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without (Sony a6000), were used for a close range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with the two cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources; the maximum area difference between them is quite small because the sections are almost overlapping. The mirrored camera was more self-consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  5. American Carrier Air Power at the Dawn of a New Century

    DTIC Science & Technology

    2005-01-01

    Systems, Office of the Secretary of Defense (Operational Test and Evaluation); then–Commander Calvin Craig, OPNAV N81; Captain Kenneth Neubauer and...TACP Tactical Air Control Party TARPS Tactical Air Reconnaissance Pod System TCS Television Camera System TLAM Tomahawk Land-Attack Missile TST Time...store any video imagery acquired by the aircraft’s systems, including the TARPS pod, the pilot’s head-up display (HUD), the Television Camera System (TCS

  6. American Society of Photogrammetry and American Congress on Surveying and Mapping, Fall Technical Meeting, ASP Technical Papers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1981-01-01

    Various topics in the field of photogrammetry are addressed. Among the subjects discussed are: remote sensing of Gulf Stream dynamics using VHRR satellite imagery; an interactive rectification system for remote sensing imagery; use of a single photo and digital terrain matrix for point positioning; crop type analysis using Landsat digital data; use of a fisheye lens in solar energy assessment; remote sensing inventory of Rocky Mountain elk habitat; Washington state's large scale ortho program; and educational image processing. Also discussed are: operational advantages of on-line photogrammetric triangulation; analysis of fracturation; field photogrammetry as a tool for measuring glacier movement; double model orthophotos used for forest inventory mapping; a map revisioning module for the Kern PG2 stereoplotter; assessing accuracy of digital land-use and terrain data; and accuracy of earthwork calculations from digital elevation data.

  7. Precise Ortho Imagery as the Source for Authoritative Airport Mapping

    NASA Astrophysics Data System (ADS)

    Howard, H.; Hummel, P.

    2016-06-01

    As the aviation industry moves from paper maps and charts to the digital cockpit and electronic flight bag, producers of these products need current and accurate data to ensure flight safety. The FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) require certified suppliers to follow a defined protocol to produce authoritative map data for the aerodrome. Typical airport maps have been produced to meet 5 m accuracy requirements. The new digital aviation world is moving to 1 m accuracy maps to provide better situational awareness on the aerodrome. The commercial availability of 0.5 m satellite imagery, combined with accurate ground control, is enabling the production of avionics-certified 0.85 m orthophotos of airports around the globe. CompassData maintains an archive of over 400 airports as source data to support producers of the 1 m certified Aerodrome Mapping Databases (AMDB) critical to flight safety and automated situational awareness. CompassData is a DO-200A certified supplier of authoritative orthoimagery, and attendees will learn how to utilize current airport imagery to build digital aviation mapping products.

  8. Fundamentals of Acoustic Backscatter Imagery

    DTIC Science & Technology

    1997-10-20

    in HYSAS of the acoustic imagery layer of the Master Seafloor Digital Database (MSDDB). Manuscript approved December 19, 1996 (Clyde E. Nishimura). ...than for sidescan systems. Refraction is simply described by Snell's law, which is derived from the eikonal equation and Fermat's principle, and can

  9. Software development and its description for Geoid determination based on Spherical-Cap-Harmonics Modelling using digital-zenith camera and gravimetric measurements hybrid data

    NASA Astrophysics Data System (ADS)

    Morozova, K.; Jaeger, R.; Balodis, J.; Kaminskis, J.

    2017-10-01

    Over several years the Institute of Geodesy and Geoinformatics (GGI) was engaged in the design and development of a digital zenith camera. At the moment the camera developments are finished and tests by field measurements have been done. In order to check these data and to use them for geoid model determination, the DFHRS (Digital Finite-element Height Reference Surface (HRS)) v4.3 software is used. It is based on parametric modelling of the HRS as a continuous polynomial surface. The HRS, providing the local geoid height N, is a necessary geodetic infrastructure for a GNSS-based determination of physical heights H from ellipsoidal GNSS heights h, by H = h - N. This publication deals with the inclusion of the observed vertical deflections from the digital zenith camera into the mathematical model of the DFHRS approach and software v4.3. A first objective was to test and validate the mathematical model and software using real data from the above-mentioned zenith-camera observations of deflections of the vertical. A second objective was to analyze the results and the improvement of the Latvian quasi-geoid computation compared with the previous HRS version, computed without zenith-camera-based deflections of the vertical. The further development of the mathematical model and software concerns the use of spherical cap harmonics as the designed carrier function for DFHRS v5. It enables - in the sense of the strict integrated geodesy approach, holding also for geodetic network adjustment - both a full gravity field and a geoid and quasi-geoid determination. In addition, it allows the inclusion of gravimetric measurements, together with deflections of the vertical from digital zenith cameras, and all other types of observations. The theoretical description of the updated version of the DFHRS software and methods is discussed in this publication.
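
    The height relation H = h - N at the heart of the approach is simple to state in code. The polynomial HRS below is purely illustrative: its coefficients are invented for the sketch, not taken from the Latvian model:

```python
def physical_height(h_ellipsoidal, geoid_height):
    """GNSS-based physical height: H = h - N, with N taken from the
    height reference surface (HRS)."""
    return h_ellipsoidal - geoid_height

def hrs_n(x, y):
    """Toy HRS: local geoid height N modelled as a low-order polynomial
    in planar coordinates (hypothetical coefficients, metres)."""
    return 23.4 + 0.002 * x - 0.001 * y + 1e-6 * x * y

h_gnss = 156.80            # ellipsoidal height from GNSS (m)
N = hrs_n(120.0, 80.0)     # local geoid height from the surface model (m)
H = physical_height(h_gnss, N)
```

    The DFHRS idea is that the coefficients of such a surface are estimated in an adjustment from heterogeneous observations, which is where the zenith-camera deflections of the vertical enter.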

  10. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    NASA Astrophysics Data System (ADS)

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands - 0.52~0.60 µm for Green, 0.63~0.69 µm for Red and 0.76~0.89 µm for NIR - at 12 m resolution. In the design of the IRIS camera, the three bands are acquired by three lines of CCDs (NIR, Red and Green). These CCDs are physically separated in the focal plane, and their first pixels are not absolutely aligned. The micro-satellite platform is also not stable enough to allow co-registration of the three bands with a simple linear transformation. In the camera model developed, this platform instability was compensated with 3rd- to 4th-order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs. red and green vs. red CCDs respectively). The cross-track alignments were 0.05 pixel and 5.9 pixels for the NIR vs. red and green vs. red CCDs respectively. The focal length was found to be shorter by about 0.8%. This was attributed to the lower temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.
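    The attitude-compensation idea above - modeling platform jitter as a low-order polynomial in time - can be sketched as a simple least-squares fit. The residual series below is synthetic and hypothetical, not XSAT telemetry.

```python
import numpy as np

# Sketch: roll (or pitch/yaw) jitter modeled as a 3rd-order polynomial in
# normalized image-line time, recovered by least squares. Data are synthetic.

t = np.linspace(0.0, 1.0, 200)                       # normalized line time
roll_residual = 0.02*t**3 - 0.05*t**2 + 0.01*t       # "true" jitter (deg)
observed = roll_residual + np.random.default_rng(0).normal(0, 1e-4, t.size)

coeffs = np.polyfit(t, observed, deg=3)              # fit 3rd-order polynomial
model = np.polyval(coeffs, t)

rms = np.sqrt(np.mean((model - roll_residual)**2))
print(f"RMS modeling error: {rms:.2e} deg")
```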

  11. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivated us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all possible image pairs according to incidence angle and capture date. For the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with a reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
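    The fusion step described above - a per-pixel median across the aligned single-pair DSMs - can be sketched with NumPy; the toy 2×2 grids below are hypothetical, with NaN marking no-data cells.

```python
import numpy as np

# Per-pixel median fusion of aligned DSMs, ignoring no-data cells (NaN).
# The median makes the fused DSM robust to outlier pairs.

dsm_stack = np.array([
    [[10.0, 12.0], [np.nan, 15.0]],   # DSM from image pair 1
    [[10.2, 11.8], [ 9.0,  15.3]],    # DSM from image pair 2
    [[ 9.9, 12.1], [ 9.2,  np.nan]],  # DSM from image pair 3
])

fused = np.nanmedian(dsm_stack, axis=0)
print(fused)
```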

  12. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
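    The 432-million-pixel figure can be reproduced with a back-of-envelope calculation, assuming the standard 9 × 9 inch (228.6 mm square) aerial film format scanned at the stated 11 µm spot size.

```python
# Equivalent pixel count of one aerial film photograph, assuming the
# standard 9 x 9 inch (228.6 mm) format scanned at an 11 um spot size.

format_mm = 228.6          # 9-inch film format (assumed)
spot_um = 11.0             # scanner spot (pixel) size from the abstract

pixels_per_side = format_mm * 1000.0 / spot_um
total_pixels = pixels_per_side ** 2
print(f"{total_pixels / 1e6:.0f} million pixels")   # ~432 million
```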

  13. Comparative study of the polaroid and digital non-mydriatic cameras in the detection of referrable diabetic retinopathy in Australia.

    PubMed

    Phiri, R; Keeffe, J E; Harper, C A; Taylor, H R

    2006-08-01

    To show that the non-mydriatic retinal camera (NMRC) using polaroid film is as effective as the NMRC using digital imaging in detecting referrable retinopathy. A series of patients with diabetes attending the eye out-patients department at the Royal Victorian Eye and Ear Hospital had single-field non-mydriatic fundus photographs taken using first a digital and then a polaroid camera. Dilated 30 degrees seven-field stereo fundus photographs were then taken of each eye as the gold standard. The photographs were graded in a masked fashion. Retinopathy levels were defined using the simplified Wisconsin Grading system. We used the kappa statistics for inter-reader and intrareader agreement and the generalized linear model to derive the odds ratio. There were 196 participants giving 325 undilated retinal photographs. Of these participants 111 (57%) were males. The mean age of the patients was 68.8 years. There were 298 eyes with all three sets of photographs from 154 patients. The digital NMRC had a sensitivity of 86.2%[95% confidence interval (CI) 65.8, 95.3], whilst the polaroid NMRC had a sensitivity of 84.1% (95% CI 65.5, 93.7). The specificities of the two cameras were identical at 71.2% (95% CI 58.8, 81.1). There was no difference in the ability of the polaroid and digital camera to detect referrable retinopathy (odds ratio 1.06, 95% CI 0.80, 1.40, P = 0.68). This study suggests that non-mydriatic retinal photography using polaroid film is as effective as digital imaging in the detection of referrable retinopathy in countries such as the USA and Australia or others that use the same criterion for referral.
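    The screening statistics above follow from a standard 2×2 comparison against the gold standard. A minimal sketch; the counts below are hypothetical, chosen only to illustrate the calculation, and are not the study's data.

```python
# Sensitivity and specificity from a 2x2 screening table, as used to compare
# each camera against the dilated seven-field gold standard.
# Counts are hypothetical, for illustration only.

tp, fn = 25, 4      # referrable retinopathy: detected / missed
tn, fp = 47, 19     # no referrable retinopathy: correctly ruled out / flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```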

  14. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products along with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote monitoring system for color imaging were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the obtained and calibrated color images, along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
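    The chart-based calibration idea can be sketched as a least-squares fit of a 3×3 correction matrix mapping measured chart-patch RGB to the chart's known reference RGB; the matrix is then applied to the fruit pixels. All patch values below are hypothetical (the reference patches are synthesized from a known matrix so the recovery can be checked), and this is only one simple form of color correction, not necessarily the authors' exact model.

```python
import numpy as np

# Fit a 3x3 color-correction matrix M (least squares) from chart patches:
# reference ~= camera @ M, then apply M to scene pixels.

camera_rgb = np.array([[0.80, 0.20, 0.15],   # measured chart patches (toy)
                       [0.25, 0.70, 0.20],
                       [0.15, 0.25, 0.75],
                       [0.60, 0.60, 0.55]])
M_true = np.array([[ 1.10, -0.05,  0.00],    # hypothetical "true" transform
                   [-0.04,  1.05, -0.02],
                   [ 0.01, -0.03,  1.08]])
reference_rgb = camera_rgb @ M_true          # "known" chart values

M, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)
calibrated = camera_rgb @ M
err = np.abs(calibrated - reference_rgb).max()
print(f"max residual after calibration: {err:.2e}")
```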

  15. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes, including remote sensing data and an in-situ measure, showed that patches with a 50-m radius had the most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779
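    A focal-patch covariate of the kind tested above - the fraction of a land-cover class within a fixed radius of a camera-trap station - can be sketched directly on a classified raster. The toy raster, class labels, and pixel scale below are hypothetical; only the 50-unit radius echoes the study's best-supported scale.

```python
import numpy as np

# Fraction of cells of a given land-cover class within a circular focal
# patch around a sample point, computed on a classified raster.

def focal_fraction(raster, row, col, radius_px, class_id):
    """Fraction of cells equal to class_id within radius_px of (row, col)."""
    rr, cc = np.ogrid[:raster.shape[0], :raster.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius_px ** 2
    return float(np.mean(raster[mask] == class_id))

rng = np.random.default_rng(1)
landcover = rng.integers(0, 3, size=(200, 200))   # 3 hypothetical classes
frac = focal_fraction(landcover, row=100, col=100, radius_px=50, class_id=1)
print(f"class-1 fraction in 50 px patch: {frac:.2f}")
```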

  16. Far-ultraviolet imagery of the Orion Nebula

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1977-01-01

    Two electrographic cameras carried on a sounding rocket have yielded useful-resolution far-ultraviolet (1000-2000 A) imagery of the Orion Nebula. The brightness distribution in the images is consistent with a primary source which is due to scattering of starlight by dust grains, although an emission-line contribution, particularly in the fainter outer regions, is not ruled out. The results are consistent with an albedo of the dust grains that is high in the far-ultraviolet and which increases toward shorter wavelengths below 1230 A.

  17. Current progress in multiple-image blind demixing algorithms

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    2000-06-01

    Imagery edges occur naturally in human visual systems as a consequence of redundancy reduction towards `sparse and orthogonality feature maps,' which have recently been derived from the maximum-entropy information-theoretical first principle of artificial neural networks. After a brief review of such Independent Component Analysis or Blind Source Separation of edge maps, we explore the demixing condition for more than two imagery objects recognizable by an intelligent pair of cameras with memory in a time-multiplexed fashion.

  18. Bringing the Digital Camera to the Physics Lab

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-03-01

    We discuss how compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of light intensity values stored in compressed files. To overcome these problems, one has to adopt uncompressed, native (raw) formats, as we examine in this work.
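    The nonlinearity in question is typically a gamma-type transfer function; assuming the common sRGB encoding (the abstract does not name a specific curve), the problem and its remedy can be sketched as follows.

```python
# Why compressed (JPEG-style) images mislead quantitative analysis: stored
# values are gamma-encoded, not proportional to light intensity. Assuming
# the standard sRGB transfer function, linearization looks like this:

def srgb_to_linear(v: float) -> float:
    """Map an sRGB-encoded value in [0, 1] to linear light intensity."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# A pixel reading twice another does NOT mean twice the light:
a, b = srgb_to_linear(0.25), srgb_to_linear(0.50)
print(f"0.50 vs 0.25 encoded -> {b / a:.1f}x more actual light")  # ~4.2x
```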

  19. The Setup Phase of Project Open Book: A Report to the Commission on Preservation and Access on the Status of an Effort to Convert Microfilm to Digital Imagery.

    ERIC Educational Resources Information Center

    Conway, Paul; Weaver, Shari

    1994-01-01

    This report documents the second phase of Yale University's Project Open Book, which explored the uses of digital technology for preservation of and access to deteriorating documents. Highlights include preconditions for project implementation; quality digital conversion; characteristics of source materials; digital document indexing; workflow…

  20. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
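    The image-to-geometry registration described above ultimately rests on projecting model coordinates into a new photo through a camera pose. A minimal pinhole-projection sketch, with the pose, focal length, and point coordinates all hypothetical placeholders (the authors' actual registration algorithm is more involved):

```python
import numpy as np

# Project a 3D model point into pixel coordinates using a pinhole camera
# with coarse pose (R, t) from the device's on-board sensors.

def project(point_xyz, R, t, f_px, cx, cy):
    """Project a world point into pixel coordinates for pose (R, t)."""
    pc = R @ (np.asarray(point_xyz, dtype=float) - t)   # world -> camera frame
    u = f_px * pc[0] / pc[2] + cx                       # perspective division
    v = f_px * pc[1] / pc[2] + cy
    return u, v

R = np.eye(3)                          # coarse orientation (identity here)
t = np.array([0.0, 0.0, -10.0])        # camera 10 m from the outcrop face
u, v = project([1.0, 0.5, 0.0], R, t, f_px=3000.0, cx=2000.0, cy=1500.0)
print(round(u, 1), round(v, 1))
```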

  1. An Insect Eye Inspired Miniaturized Multi-Camera System for Endoscopic Imaging.

    PubMed

    Cogal, Omer; Leblebici, Yusuf

    2017-02-01

    In this work, we present a miniaturized high-definition vision system inspired by insect eyes, with a distributed illumination method, which can work in dark environments for proximity imaging applications such as endoscopy. Our approach is based on modeling biological systems with off-the-shelf miniaturized cameras combined with digital circuit design for real-time image processing. We built a 5 mm radius hemispherical compound eye imaging a 180°×180° field of view while providing more than 1.1 megapixels (emulated ommatidia) as real-time video with an inter-ommatidial angle Δϕ = 0.5° at 18 mm radial distance. We made an FPGA implementation of the image processing system which is capable of generating 25 fps video at 1080 × 1080 pixel resolution with a 120 MHz processing clock frequency. When compared to similar-size insect-eye-mimicking systems in the literature, the system proposed in this paper features a 1000× resolution increase. To the best of our knowledge, this is the first time that a compound eye with built-in illumination is reported. We offer our miniaturized imaging system for endoscopic applications like colonoscopy or laparoscopic surgery, where there is a need for large field of view, high-definition imagery. For that purpose we tested our system inside a human colon model. We also present the resulting images and videos from the human colon model in this paper.

  2. A Digital Approach to Learning Petrology

    NASA Astrophysics Data System (ADS)

    Reid, M. R.

    2011-12-01

    In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive (~$200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used to help students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features using the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to individual students' needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help students progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed. 
In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.

  3. A Fully Automated Method of Locating Building Shadows for Aerosol Optical Depth Calculations in High-Resolution Satellite Imagery

    DTIC Science & Technology

    2010-09-01

    absorption, limiting the effectiveness of intelligence collection and weapon systems that operate in those portions of the spectrum by reducing the amount of... Intelligence Agency Web site in NITF 2.0 format. This study used basic imagery from DigitalGlobe (QuickBird, WorldView-1). This imagery is not...databases. Militarily, FASTEC could enable in-scene correction in intelligence collection and possibly influence electro- optical targeting decisions

  4. Monitoring land degradation in southern Tunisia: A test of LANDSAT imagery and digital data

    NASA Technical Reports Server (NTRS)

    Hellden, U.; Stern, M.

    1980-01-01

    The possible use of LANDSAT imagery and digital data for monitoring desertification indicators in Tunisia was studied. Field data were sampled in Tunisia to estimate mapping accuracy in maps generated through interpretation of LANDSAT false color composites and through processing of LANDSAT computer compatible tapes, respectively. Temporal change studies were carried out through geometric registration of computer-classified windows from 1972 to classified data from 1979. Indications of land degradation were noted in some areas. No important differences in results were found between the interpretation approach and the computer processing approach.

  5. Natural resources research and development in Lesotho using LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jackson, A. A. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A map of the drainage of the whole country, including at least third-order streams, was constructed from LANDSAT imagery. This was digitized and can be plotted at any required scale to provide base maps for other cartographic projects. A suite of programs for the interpretation of digital LANDSAT data is under development for a low-cost programmable calculator. Initial output from these programs has proved to have better resolution and detail than the standard photographic products, and was used to update the standard topographic map of a particular region.

  6. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In this method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, datasets for projector calibration are generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
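    One core step of plane-based (Zhang-style) calibration with the projector treated as an inverse camera is estimating the homography between board points and projector-image points. A minimal DLT sketch; the correspondences below are synthetic (generated from a known homography, which the code then recovers), not data from the paper.

```python
import numpy as np

# Direct Linear Transform (DLT) estimate of a 3x3 homography H such that
# dst ~ H @ src in homogeneous coordinates, from point correspondences.

def dlt_homography(src, dst):
    """Estimate H (up to scale) from Nx2 arrays of corresponding points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)          # null vector of A is the solution
    return H / H[2, 2]

# Synthetic ground-truth homography and exact correspondences:
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.3]], dtype=float)
homog = np.hstack([src, np.ones((len(src), 1))]) @ H_true.T
dst = homog[:, :2] / homog[:, 2:]

H_est = dlt_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```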

  7. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  8. Cartography for lunar exploration: 2008 status and mission plans

    USGS Publications Warehouse

    Kirk, R.L.; Archinal, B.A.; Gaddis, L.R.; Rosiek, M.R.; Chen, Jun; Jiang, Jie; Nayak, Shailesh

    2008-01-01

    The initial spacecraft exploration of the Moon in the 1960s-70s yielded extensive data, primarily in the form of film and television images, which were used to produce a large number of hardcopy maps by conventional techniques. A second era of exploration, beginning in the early 1990s, has produced digital data including global multispectral imagery and altimetry, from which a new generation of digital map products tied to a rapidly evolving global control network has been made. Efforts are also underway to scan the earlier hardcopy maps for online distribution and to digitize the film images so that modern processing techniques can be used to make high-resolution digital terrain models (DTMs) and image mosaics consistent with the current global control. The pace of lunar exploration is accelerating dramatically, with as many as eight new missions already launched or planned for the current decade. These missions, of which the most important for cartography are SMART-1 (Europe), Kaguya/SELENE (Japan), Chang'e-1 (China), Chandrayaan-1 (India), and Lunar Reconnaissance Orbiter (USA), will return a volume of data exceeding that of all previous lunar and planetary missions combined. Framing and scanner camera images, including multispectral and stereo data, hyperspectral images, synthetic aperture radar (SAR) images, and laser altimetry will all be collected, including, in most cases, multiple data sets of each type. Substantial advances in international standardization and cooperation, development of new and more efficient data processing methods, and availability of resources for processing and archiving will all be needed if the next generation of missions are to fulfill their potential for high-precision mapping of the Moon in support of subsequent exploration and scientific investigation.

  9. sUAS for Rapid Pre-Storm Coastal Characterization and Vulnerability Assessment

    NASA Astrophysics Data System (ADS)

    Brodie, K. L.; Slocum, R. K.; Spore, N.

    2015-12-01

    Open coast beaches and surf-zones are dynamic three-dimensional environments that can evolve rapidly on the time-scale of hours in response to changing environmental conditions. Up-to-date knowledge about the pre-storm morphology of the coast can be instrumental in making accurate predictions about coastal change and damage during large storms like Hurricanes and Nor'Easters. For example, alongshore variations in the shape of ephemeral sandbars along the coastline can focus wave energy, subjecting different stretches of coastline to significantly higher waves. Variations in beach slope and width can also alter wave runup, causing higher wave-induced water levels which can cause overwash or inlet breaching. Small Unmanned Aerial Systems (sUAS) offer a new capability to rapidly and inexpensively map vulnerable coastlines in advance of approaching storms. Here we present results from a prototype system that maps coastal topography and surf-zone morphology utilizing a multi-camera sensor. Structure-from-motion algorithms are used to generate topography and also constrain the trajectory of the sUAS. These data, in combination with mount boresight information, are used to rectify images from ocean-facing cameras. Images from all cameras are merged to generate a wide field of view allowing up to 5 minutes of continuous imagery time-series to be collected as the sUAS transits the coastline. Water imagery is then analyzed using wave-kinematics algorithms to provide information on surf-zone bathymetry. To assess this methodology, the absolute and relative accuracy of topographic data are evaluated in relation to simultaneously collected terrestrial lidar data. Ortho-rectification of water imagery is investigated using visible fixed targets installed in the surf-zone, and through comparison to stationary tower-based imagery. 
Future work will focus on evaluating how topographic and bathymetric data from this sUAS approach can be used to update forcing parameters in both empirical and numerical models predicting coastal inundation and erosion in advance of storms.
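    Surf-zone depth inversion from wave imagery of the kind mentioned above typically rests on the linear dispersion relation for surface gravity waves. A minimal sketch, assuming hypothetical wave observations; this is the generic inversion idea, not the authors' specific algorithm.

```python
import math

# Invert the linear dispersion relation c^2 = (g/k) * tanh(k*h) for water
# depth h, given an observed wave period and phase speed, by bisection.

G = 9.81  # gravitational acceleration (m/s^2)

def depth_from_waves(period_s, speed_ms, h_lo=0.05, h_hi=50.0):
    """Return water depth (m) consistent with the observed wave kinematics."""
    k = 2.0 * math.pi / (speed_ms * period_s)      # wavenumber from c = L/T
    def residual(h):
        return speed_ms ** 2 - (G / k) * math.tanh(k * h)
    for _ in range(100):                           # bisection on depth
        mid = 0.5 * (h_lo + h_hi)
        if residual(h_lo) * residual(mid) <= 0:
            h_hi = mid
        else:
            h_lo = mid
    return 0.5 * (h_lo + h_hi)

h = depth_from_waves(period_s=8.0, speed_ms=6.0)   # hypothetical observation
print(f"estimated depth: {h:.2f} m")
```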

  10. Report of the facility definition team spacelab UV-Optical Telescope Facility

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Scientific requirements for the Spacelab Ultraviolet-Optical Telescope (SUOT) facility are presented. Specific programs involving high angular resolution imagery over wide fields, far ultraviolet spectroscopy, precisely calibrated spectrophotometry and spectropolarimetry over a wide wavelength range, and planetary studies, including high resolution synoptic imagery, are recommended. Specifications for the mounting configuration, instrument mounting system, optical parameters, and the pointing and stabilization system are presented. Concepts for the focal plane instruments are defined. The functional requirements of the direct imaging camera, far ultraviolet spectrograph, and the precisely calibrated spectrophotometer are detailed, and the planetary camera concept is outlined. Operational concepts described in detail are: the makeup and functions of the shuttle payload crew, extravehicular activity requirements, telescope control and data management, the payload operations control room, orbital constraints, and orbital interfaces (stabilization, maneuvering requirements and attitude control, contamination, utilities, and payload weight considerations).

  11. Autonomous Exploration for Gathering Increased Science

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Castano, Rebecca; Estlin, Tara A.; Gaines, Daniel M.; Anderson, Robert C.; Thompson, David R.; DeGranville, Charles K.; Chien, Steve A.; Tang, Benyang; Burl, Michael C.; hide

    2010-01-01

    The Autonomous Exploration for Gathering Increased Science System (AEGIS) provides automated targeting for remote sensing instruments on the Mars Exploration Rover (MER) mission, which at the time of this reporting has had two rovers exploring the surface of Mars (see figure). Currently, targets for rover remote-sensing instruments must be selected manually based on imagery already on the ground with the operations team. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In particular, this technology will be used to automatically acquire sub-framed, high-resolution, targeted images taken with the MER panoramic cameras. This software provides: 1) Automatic detection of terrain features in rover camera images, 2) Feature extraction for detected terrain targets, 3) Prioritization of terrain targets based on a scientist target feature set, and 4) Automated re-targeting of rover remote-sensing instruments at the highest priority target.

  12. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    NASA Astrophysics Data System (ADS)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction, with metric content, of a submerged area where objects and structures of archaeological interest are found could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent fruition by users after delivery through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, digital low cost (off-the-shelf) cameras. Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD and GoPro HERO 2. The experiments aimed to evaluate the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and to analyze the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  13. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot, or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface, including serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
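The detection described above amounts to locating a negative peak in a 1-D intensity profile. A minimal Python sketch of that idea follows, with a synthetic line image and an assumed threshold rule; the camera's actual analog/digital circuit is not specified at this level of detail:

```python
import numpy as np

def shock_location(profile, drop=0.5):
    """Return the index of the deepest dip in a 1-D intensity profile,
    or None if no sample falls below `drop` * median (no shock present)."""
    profile = np.asarray(profile, dtype=float)
    baseline = np.median(profile)          # typical brightness of the laser sheet
    idx = int(np.argmin(profile))          # deepest negative peak
    if profile[idx] < drop * baseline:
        return idx
    return None

# Synthetic laser-sheet line image: bright baseline with a shadowgraph dip.
line = np.full(1024, 200.0)
line[400:410] -= 150.0        # dark spot where the shock refracts the sheet
print(shock_location(line))   # -> 400
```

At 1,000+ frames per second this per-line computation is trivial, which is exactly why pushing it into a smart camera removes the need for a separate PC.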

  14. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.

  15. EAARL coastal topography and imagery–Western Louisiana, post-Hurricane Rita, 2005: First surface

    USGS Publications Warehouse

    Bonisteel-Cormier, Jamie M.; Wright, Wayne C.; Fredericks, Alexandra M.; Klipp, Emily S.; Nagle, Doug B.; Sallenger, Asbury H.; Brock, John C.

    2013-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, Virginia. This project provides highly detailed and accurate datasets of a portion of the Louisiana coastline beachface, acquired post-Hurricane Rita on September 27-28 and October 2, 2005. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Website.
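The first/last significant return extraction mentioned above can be sketched as a simple threshold crossing on a recorded waveform; the real ALPS algorithms are more sophisticated, so treat this NumPy toy (with an invented waveform) as an assumption-laden illustration only:

```python
import numpy as np

def first_last_returns(waveform, threshold):
    """Indices (range gates) of the first and last samples exceeding
    `threshold` in a waveform; None if nothing exceeds it."""
    w = np.asarray(waveform)
    above = np.flatnonzero(w > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

# Toy waveform: a weak canopy return near gate 20, a strong ground
# return near gate 55, on top of a small noise floor.
t = np.arange(80)
wf = (5
      + 40 * np.exp(-0.5 * ((t - 20) / 2.0) ** 2)
      + 60 * np.exp(-0.5 * ((t - 55) / 1.5) ** 2))

first, last = first_last_returns(wf, threshold=15)
```

The first return drives the first-surface (FS) topography products, while the last return feeds the "bare earth" filtering described in the record.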

  16. Automated Meteor Detection by All-Sky Digital Camera Systems

    NASA Astrophysics Data System (ADS)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
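The record does not detail its detection pipeline, but a typical first stage for picking out meteor trails in a time sequence is frame differencing; this hedged NumPy sketch with synthetic frames shows only that first stage, not the authors' full method:

```python
import numpy as np

def moving_bright_pixels(prev_frame, frame, diff_thresh=30):
    """Boolean mask of pixels that brightened noticeably since the
    previous frame -- a crude first stage for flagging meteor trails
    before any shape or trajectory analysis."""
    diff = frame.astype(int) - prev_frame.astype(int)
    return diff > diff_thresh

# Two synthetic all-sky frames: a static star field plus a short
# diagonal trail appearing in the second frame.
rng = np.random.default_rng(0)
f1 = rng.integers(0, 20, size=(64, 64)).astype(np.uint8)
f2 = f1.copy()
rr = np.arange(10, 30)     # 20-pixel diagonal streak
f2[rr, rr] = 255

mask = moving_bright_pixels(f1, f2)
print(mask.sum())  # -> 20
```

In practice the mask would be followed by line-shape tests (e.g. elongation of connected components) to reject clouds, aircraft, and sensor noise.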

  17. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
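Two of the corrections described, flat-field vignetting removal and a normalized vegetation index, can be sketched in a few lines. The authors' exact correction model is not given in the abstract, so this NumPy version (1-D toy data) is illustrative only:

```python
import numpy as np

def correct_vignetting(image, flat_field):
    """Divide out a flat-field frame (same lens, aperture, and focus)
    so that corner fall-off no longer biases the radiometry."""
    flat = flat_field / flat_field.max()   # unity gain at the brightest point
    return image / flat

def normalized_index(nir, red):
    """NDVI-style normalized difference, computed per pixel."""
    return (nir - red) / (nir + red)

# Toy 1-D example: a uniform scene darkened toward the edges by vignetting.
true_signal = np.full(5, 100.0)
falloff = np.array([0.6, 0.8, 1.0, 0.8, 0.6])
observed = true_signal * falloff

corrected = correct_vignetting(observed, flat_field=falloff)
```

The ratio structure of the normalized index is why the study found it largely self-correcting for scene illumination: a multiplicative brightness change cancels between numerator and denominator.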

  18. Monitoring height and greenness of non-woody floodplain vegetation with UAV time series

    NASA Astrophysics Data System (ADS)

    van Iersel, Wimala; Straatsma, Menno; Addink, Elisabeth; Middelkoop, Hans

    2018-07-01

    Vegetation in river floodplains has important functions for biodiversity, but can also have a negative influence on flood safety. Floodplain vegetation is becoming increasingly heterogeneous in space and time as a result of river restoration projects. To document the spatio-temporal patterns of the floodplain vegetation, the need arises for efficient monitoring techniques. Monitoring is commonly performed by mapping floodplains based on single-epoch remote sensing data, thereby not considering seasonal dynamics of vegetation. The rising availability of unmanned airborne vehicles (UAV) increases monitoring frequency potential. Therefore, we aimed to evaluate the performance of multi-temporal high-spatial-resolution imagery, collected with a UAV, to record the dynamics in floodplain vegetation height and greenness over a growing season. Since the classification accuracy of current airborne surveys remains insufficient for low vegetation types, we focussed on seasonal variation of herbaceous and grassy vegetation with a height up to 3 m. Field reference data on vegetation height were collected six times during one year in 28 field plots within a single floodplain along the Waal River, the main distributary of the Rhine River in the Netherlands. Simultaneously with each field survey, we recorded UAV true-colour and false-colour imagery from which normalized digital surface models (nDSMs) and a consumer-grade camera vegetation index (CGCVI) were calculated. 
We observed that: (1) the accuracy of a UAV-derived digital terrain model (DTM) varies over the growing season and is highest during winter when the vegetation is dormant, (2) vegetation height can be determined from the nDSMs in leaf-on conditions via linear regression (RMSE = 0.17-0.33 m), (3) the multitemporal nDSMs yielded meaningful temporal profiles of greenness and vegetation height, and (4) herbaceous vegetation shows hysteresis between greenness and vegetation height, but no clear hysteresis was observed for grassland vegetation. These results show the high potential of UAV-borne sensors for increasing the classification accuracy of low floodplain vegetation within the framework of floodplain monitoring.
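The nDSM and greenness computations can be sketched as follows. Note that the CGCVI formula is not given in the record, so the index below uses a commonly published consumer-camera greenness formula, (2G - R - B)/(2G + R + B), assumed here purely for illustration:

```python
import numpy as np

def ndsm(dsm, dtm):
    """Normalized digital surface model: surface height above the
    (leaf-off, winter) terrain model, clipped at zero."""
    return np.maximum(dsm - dtm, 0.0)

def greenness(r, g, b):
    """An assumed consumer-camera greenness index, (2G - R - B)/(2G + R + B);
    the paper's CGCVI may be defined differently."""
    return (2 * g - r - b) / (2 * g + r + b)

# Toy 3x3 rasters: flat terrain with herbaceous vegetation up to 2.8 m.
dtm = np.zeros((3, 3))
dsm = np.array([[0.0, 0.5, 1.2],
                [0.2, 0.9, 2.8],
                [0.0, 0.1, 0.4]])
heights = ndsm(dsm, dtm)
print(heights.max())   # -> 2.8, i.e. within the < 3 m study range
```

Repeating both computations per survey date yields exactly the kind of paired height/greenness time series in which the hysteresis reported above becomes visible.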

  19. Landslide Mapping Using Imagery Acquired by a Fixed-Wing Uav

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Jhan, J. P.; Lo, C. F.; Lin, Y. S.

    2011-09-01

    In Taiwan, the average annual rainfall is about 2,500 mm, about three times the world average. Hill slopes, which are mostly under meta-stable conditions due to fragmented surface materials, can easily be disturbed by heavy typhoon rainfall and/or earthquakes, resulting in landslides and debris flows. Thus, an efficient data acquisition and disaster surveying method is critical for decision making. Compared with satellites and airplanes, the unmanned aerial vehicle (UAV) is a portable and dynamic platform for data acquisition, particularly when only a small target area is required. In this study, a fixed-wing UAV equipped with a consumer-grade digital camera (a Canon EOS 450D), a flight control computer, a Garmin GPS receiver, and an attitude and heading reference system (AHRS) is proposed. The adopted UAV has about two hours of flight endurance with a flight control range of 20 km and a payload of 3 kg, which is suitable for a medium-scale mapping and surveying mission. In the paper, a test area 21.3 km2 in size containing hundreds of landslides induced by Typhoon Morakot is used for landslide mapping. The flight height is around 1,400 meters and the ground sampling distance of the acquired imagery is about 17 cm. Aerial triangulation, ortho-image generation, and mosaicking are first applied to the acquired images. An automatic landslide detection algorithm is then proposed based on the object-based image analysis (OBIA) technique, using the color ortho-image and a digital elevation model (DEM). The ortho-images before and after the typhoon are utilized to identify new landslide regions. Experimental results show that the developed algorithm can achieve a producer's accuracy of up to 91%, a user's accuracy of 84%, and a Kappa index of 0.87. This demonstrates the feasibility of the landslide detection algorithm and the applicability of a fixed-wing UAV for landslide mapping.
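The producer's/user's accuracy and Kappa index reported above all derive from a confusion matrix. A small Python sketch with made-up landslide/non-landslide counts (not the paper's data) shows how each figure is computed:

```python
import numpy as np

def accuracy_metrics(confusion):
    """Producer's/user's accuracy per class and Cohen's kappa from a
    square confusion matrix (rows = reference, columns = classified)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    producers = np.diag(c) / c.sum(axis=1)   # recall per reference class
    users = np.diag(c) / c.sum(axis=0)       # precision per mapped class
    po = np.trace(c) / total                               # observed agreement
    pe = (c.sum(axis=1) * c.sum(axis=0)).sum() / total**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return producers, users, kappa

# Two classes, landslide / non-landslide, with invented counts.
cm = [[91, 9],     # 91 of 100 reference landslide samples detected
      [17, 883]]   # 17 false alarms among 900 non-landslide samples
prod, user, kappa = accuracy_metrics(cm)
```

Here the producer's accuracy for the landslide class is 0.91 (how much of the true landslide area was found) while the user's accuracy is 91/108 (how much of the mapped landslide area is real); Kappa discounts the agreement expected by chance.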

  20. Surface Signatures of an Underground Explosion as Captured by Photogrammetry

    NASA Astrophysics Data System (ADS)

    Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.; Cooley, J.; Rougier, E.; Larmat, C. S.; Norskog, K.

    2016-12-01

    This study employed high-resolution photogrammetric modeling to quantify cm-scale surface topographic changes resulting from a 5,000 kg underground chemical explosion. The test occurred in April 2016 at a depth of 76 m within a quartz monzonite intrusion in southern Nevada. The field area was a 210 m x 150 m polygon broadly centered on the explosion's emplacement hole. A grid of ground control points (GCPs) installed in the field area established control within the collection boundaries and ensured high-resolution digital model parameterization. Using RTK GPS techniques, GCP targets were surveyed in the days before and then again immediately after the underground explosion. A quadcopter UAS with a 12 MP camera payload captured overlapping imagery at two flight altitudes (10 m and 30 m AGL) along automated flight courses for consistency and repeatability. The overlapping imagery was used to generate two digital elevation models, pre-shot and post-shot, for each of the flight altitudes. Spatial analyses of the DEMs and orthoimagery show uplift on the order of 1 to 18 cm in the immediate area near ground zero. Other features such as alluvial fracturing appear in the photogrammetric and topographic datasets. Portions of the nearby granite outcrop experienced rock fall and rock rotation. The study detected erosional and depositional features on the test bed and adjacent to it. In addition to vertical change, pre-shot and post-shot surveys of the GCPs suggest evidence for lateral motion on the test bed surface, with movement away from surface ground zero on the order of 1 to 3 cm. Results demonstrate that the UAS photogrammetry method provides an efficient, high-fidelity, non-invasive means of quantifying surface deformation. The photogrammetry data allow quantification of permanent surface deformation and of the spatial extent of damage. These constraints are necessary to develop hydrodynamic and seismic models of explosions that can be verified against recorded seismic data.
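The uplift analysis boils down to differencing the pre- and post-shot DEMs cell by cell. A toy NumPy sketch with an assumed noise floor follows; the study's actual change-detection thresholds are not stated in the record:

```python
import numpy as np

def surface_change(pre_dem, post_dem, noise_floor=0.01):
    """Per-cell elevation change (metres) between pre- and post-shot DEMs;
    changes smaller than `noise_floor` are treated as no change."""
    dz = post_dem - pre_dem
    dz[np.abs(dz) < noise_floor] = 0.0
    return dz

# Toy 5x5 DEMs: 18 cm of uplift over ground zero, plus one sub-cm
# fluctuation that the noise floor suppresses.
pre = np.zeros((5, 5))
post = pre.copy()
post[2, 2] = 0.18      # 18 cm uplift at surface ground zero
post[0, 0] = 0.005     # below the noise floor -> ignored

dz = surface_change(pre, post)
print(dz.max(), np.count_nonzero(dz))  # -> 0.18 1
```

The lateral (horizontal) motion reported above cannot come from DEM differencing alone; it requires tracking the surveyed GCP targets between epochs.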

  1. Using small unmanned aerial vehicle for instream habitat evaluation and modelling

    NASA Astrophysics Data System (ADS)

    Astegiano, Luca; Vezza, Paolo; Comoglio, Claudio; Lingua, Andrea; Spairani, Michele

    2015-04-01

    Recent advances in digital image collection and processing have led to the increased use of unmanned aerial vehicles (UAV) for river research and management. In this paper, we assess the capabilities of a small UAV to characterize physical habitat for fish in three river stretches of North-Western Italy. The main aim of the study was to identify the advantages and challenges of this technology for environmental river management, in the context of the increasing river exploitation for hydropower production. The UAV used to acquire overlapping images was a small quadcopter carrying two different high-resolution (non-metric) cameras (Nikon J1™ and GoPro Hero 3 Black Edition™). The quadcopter was preprogrammed to fly set waypoints using a small tablet PC. With the acquired imagery, we constructed a 5-cm resolution orthomosaic image and a digital surface model (DSM). The two products were used to map the distribution of aquatic and riparian habitat features, i.e., wetted area, morphological unit distribution, bathymetry, water surface gradient, substrates and grain sizes, and shelters and cover for fish. The study assessed the quality of the collected data and used this information to identify key reach-scale metrics and important aspects of fluvial morphology and aquatic habitat. The potential and limitations of using UAV for physical habitat surveys were evaluated, and the collected data were used to initialize and run common habitat simulation tools (MesoHABSIM). Several advantages of using UAV-based imagery were found, including low-cost procedures, high resolution, and efficiency in data collection. However, some challenges were identified for bathymetry extraction (vegetation obstructions, white waters, turbidity) and grain size assessment (preprocessing of data and automatic object detection). The application domain and possible limitations for instream habitat mapping were defined and will be used as a reference for future studies.
Ongoing activities include the possibility of using topographic data and discharge measurements to extract average values of flow velocity in cross sections.

  2. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  3. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored, and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the colors of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low because of several problems. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulated and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
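The final colorization step, projecting points into the registered image and sampling colours, can be sketched with a plain pinhole model. This NumPy version ignores occlusion and lens distortion, which the paper's VVS framework would handle, and uses invented intrinsics:

```python
import numpy as np

def project_points(points, K):
    """Project 3-D points (camera frame, Z > 0) through intrinsics K;
    returns integer pixel coordinates (u, v)."""
    uvw = (K @ points.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return np.round(uv).astype(int)

def colorize(points, K, image):
    """Assign each point the colour of the pixel it projects to
    (points projecting outside the image keep colour (0, 0, 0))."""
    uv = project_points(points, K)
    h, w = image.shape[:2]
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors[inside] = image[uv[inside, 1], uv[inside, 0]]
    return colors

# Assumed intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 2.0],    # projects to the principal point
                [0.5, 0.0, 2.0]])   # 200 px to the right of it
img = np.zeros((480, 640, 3), dtype=np.uint8)
img[240, 320] = (255, 0, 0)
img[240, 520] = (0, 255, 0)
print(colorize(pts, K, img))        # each point picks up its pixel's colour
```

In the real pipeline the points are first transformed by the estimated extrinsic pose before this camera-frame projection, and only points passing a visibility test are recoloured.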

  4. Use of Aerial Hyperspectral Imaging For Monitoring Forest Health

    Treesearch

    Milton O. Smith; Nolan J. Hess; Stephen Gulick; Lori G. Eckhardt; Roger D. Menard

    2004-01-01

    This project evaluates the effectiveness of aerial hyperspectral digital imagery in the assessment of forest health of loblolly stands in central Alabama. The imagery covers 50 square miles, in Bibb and Hale Counties, south of Tuscaloosa, AL, which includes intensive managed forest industry sites and National Forest lands with multiple use objectives. Loblolly stands...

  5. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
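Mapping PTZ commands onto a geo-referenced orthophoto ultimately requires turning pan/tilt angles into ground coordinates. A simplified sketch under an assumed angle convention follows; as the record implies, real PTZ cameras need a per-camera calibrated control model rather than this idealized geometry:

```python
import numpy as np

def pan_tilt_to_ray(pan_deg, tilt_deg):
    """Unit viewing ray for given pan/tilt angles (pan about the vertical
    axis, tilt = 0 at the horizon, positive downward) -- one simple
    convention; actual PTZ hardware conventions vary."""
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    return np.array([np.cos(t) * np.sin(p), np.cos(t) * np.cos(p), -np.sin(t)])

def ground_intersection(camera_pos, pan_deg, tilt_deg):
    """Where the viewing ray hits the z = 0 ground plane (for placing a
    camera footprint on an orthophoto); None at or above the horizon."""
    ray = pan_tilt_to_ray(pan_deg, tilt_deg)
    if ray[2] >= 0:
        return None
    s = -camera_pos[2] / ray[2]
    return camera_pos + s * ray

# Camera on a 10 m pole looking 45 degrees down at pan 0 ("north"):
print(ground_intersection(np.array([0.0, 0.0, 10.0]), 0.0, 45.0))  # ~[0, 10, 0]
```

Inverting this mapping (ground point to pan/tilt) is what lets a geo-referenced map drive automatic camera repositioning across the network.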

  6. Feasibility study for automatic reduction of phase change imagery

    NASA Technical Reports Server (NTRS)

    Nossaman, G. O.

    1971-01-01

    The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.
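The model-to-image transformation from conjugate points can be illustrated, for the planar case, with a least-squares projective (DLT) fit; the report's actual transformation model is not stated here, so this NumPy stand-in with a synthetic known transform is only an illustration:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares planar projective transform (DLT with h33 = 1)
    from >= 4 conjugate point pairs; the report used six."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Recover a known transform (pure scaling by 2) from six conjugate points.
model = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], float)
image = 2.0 * model
H = fit_homography(model, image)
```

Once fitted, the forward transform maps model coordinates into the image for contour superposition, and its inverse carries image measurements back onto the model surface, as the study describes.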

  7. Evaluation of ERTS-1 image sensor spatial resolution in photographic form

    NASA Technical Reports Server (NTRS)

    Slater, P. N. (Principal Investigator); Schowengerdt, R. A.

    1975-01-01

    The author has identified the following significant results. The digital Optical Transfer Function (OTF) measurements showed the following: (1) there are no significant differences in optical performance, in terms of OTF, among all four bands of the multispectral scanner, (2) no substantial changes in the OTF's of bands 4, 5, and 6 during the period November 1972 to May 1973, and (3) comparison between the photographic and digital (CCT) two-dimensional OTF's indicated a strong asymmetry in the photographic product OTF between the MSS scan direction and across scan direction. The coherent light Fourier analysis program showed the following: (1) for agricultural areas, bands 5 and 7 of the MSS are superior in terms of image definition, and therefore mapping and acreage estimation, (2) amplitude modulation in imagery from MSS bands 4 and 5 is between 65 to 90 percent of that in corresponding bands of Apollo 9 imagery (SO65), and (3) MSS band 5 imagery has a ground resolution between 55 to 75 percent of that exhibited in the corresponding band of Apollo 9 imagery (SO65).

  8. Navigation and Electro-Optic Sensor Integration Technology for Fusion of Imagery and Digital Mapping Products

    DTIC Science & Technology

    1999-08-01

    Electro-Optic Sensor Integration Technology (NEOSIT) software application. The design is highly modular and based on COTS tools to facilitate integration with sensors, navigation and digital data sources already installed on different host

  9. Monitoring of environmental effects of coal strip mining from satellite imagery

    NASA Technical Reports Server (NTRS)

    Brooks, R. L.; Parra, C. G.

    1976-01-01

    This paper evaluates satellite imagery as a means of monitoring coal strip mines and their environmental effects. The satellite imagery employed is Skylab EREP S-190A and S-190B from the SL-2, SL-3 and SL-4 missions; a large variety of camera/film/filter combinations has been reviewed. The investigation includes determining the applicability of satellite imagery for detection of disturbed acreage in areas of coal surface mining as well as the much more detailed monitoring of specific surface-mining operations, including: active mines, inactive mines, highwalls, ramp roads, pits, water impoundments and their associated acidity, graded areas and types of grading, and reclaimed areas. Techniques have been developed to enable mining personnel to utilize this imagery in a practical and economic manner, requiring no previous photo-interpretation background and no purchases of expensive viewing or data-analysis equipment. To corroborate the photo-interpretation results, on-site observations were made in the very active mining area near Madisonville, Kentucky.

  10. Concept of a digital aerial platform for conducting observation flights under the open skies treaty. (Polish Title: Koncepcja cyfrowej platformy lotniczej do realizacji misji obserwacyjnych w ramach traktatu o otwartych przestworzach)

    NASA Astrophysics Data System (ADS)

    Walczykowski, P.; Orych, A.

    2013-12-01

    The Treaty on Open Skies, to which Poland has been a signatory from the very beginning, was signed in 1992 in Helsinki. The main principle of the Treaty is increasing the openness of military activities conducted by the States Parties and control over respecting disarmament agreements. Responsibilities given by the Treaty are fulfilled by conducting and receiving a given number of observation flights over the territories of the Treaty signatories. Among the 34 countries currently taking an active part in this Treaty, only some own certified airplanes and observation sensors. Poland is within the group of countries that do not own their own platform and therefore fulfills Treaty requirements using the Ukrainian An-30b. Initially, the Treaty only permitted the use of analogue sensors for the acquisition of imagery data. Together with the development of digital techniques, a rise in the need for digital imagery products has been noted. Currently, digital photography is used in almost all fields of study and everyday life. This has led to very rapid developments in digital sensor technologies, employing the newest and most innovative solutions. Digital imagery products have many advantages and have now almost fully replaced traditional film sensors. Digital technologies have given rise to a new era in Open Skies. The Open Skies Consultative Commission, having conducted many series of tests, signed a new Decision to the Treaty, which allows digital aerial sensors to be used during observation flights. The main aim of this article is to present a concept for choosing digital sensors and selecting an airplane, that is, a digital aerial platform, which could be used by Poland for Open Skies purposes. A thorough analysis of airplanes currently used by the Polish Air Force was conducted in terms of their specifications and the possibility of their employment for Open Skies Treaty missions. 
Next, an analysis was conducted of the latest aerial digital sensors offered by leading commercial manufacturers. The sensors were analyzed in terms of the accordance of their specifications with the technical requirements of the Treaty.

  11. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which is considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). 
Extending the parameter model with FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain showed significant improvements only for a small group of cameras. With this calibration procedure, the Nikon D3 yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space), indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied to accurate photogrammetric surveys and that little effort is needed to greatly improve their accuracy potential.
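
    The quality figure quoted throughout this abstract, the maximum absolute length measurement error of VDI/VDE 2634 Part 1, is the largest deviation between photogrammetrically measured lengths and calibrated reference lengths. A minimal sketch of that computation, using hypothetical scale-bar values rather than data from the study:

```python
def max_length_measurement_error(measured_mm, reference_mm):
    """Largest absolute deviation (mm) between measured and reference lengths."""
    return max(abs(m - r) for m, r in zip(measured_mm, reference_mm))

# Hypothetical calibrated scale-bar lengths and photogrammetric results (mm)
reference = [250.012, 500.047, 750.031, 1000.094]
measured = [250.031, 500.015, 750.060, 1000.070]

error_mm = max_length_measurement_error(measured, reference)
print(f"max length measurement error = {error_mm * 1000:.0f} um")
```

    With the numbers above this reports 32 um; the 47, 52, 29, and 25 um results in the abstract are stated in the same figure of merit.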

  12. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled, vacuum-tight camera casing. The casing contains a commercial digital camera and a lighting system. The design of the casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and in liquid nitrogen are presented. The cryo-camera provides a live view inside cryogenic set-ups and allows video recording.

  13. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are simultaneously captured by a digital camera. Based on an image containing both interference patterns, an estimate of the infrared radiation wavelength emitted by the TV remote control is made. (Contains 4 figures.)
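
    The ratio method behind such an estimate can be sketched as follows: for a grating obeying d·sin(θ) = m·λ, fringes of the same order are displaced in proportion to wavelength (for small angles), so the unknown IR wavelength follows from the known laser wavelength and the two fringe offsets measured in the same image. The pixel offsets below are hypothetical, not values from the paper:

```python
LAMBDA_RED_NM = 650.0  # known wavelength of the red reference laser (assumed)

# Hypothetical first-order fringe offsets measured in the same photograph (pixels)
x_red_px = 412.0   # red laser pattern
x_ir_px = 598.0    # remote-control pattern

# Grating equation d*sin(theta) = m*lambda; for the same order m and geometry,
# the small-angle fringe offset is proportional to wavelength.
lambda_ir_nm = LAMBDA_RED_NM * (x_ir_px / x_red_px)
print(f"estimated IR wavelength ~ {lambda_ir_nm:.0f} nm")
```

    The offsets chosen here give roughly 943 nm, close to the ~940 nm typical of infrared remote-control LEDs.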

  14. Noncontact imaging of plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion with a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa

    2016-03-01

    A non-contact imaging method using a digital RGB camera is proposed to evaluate the plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion. In vivo experiments on human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.
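
    The principle such camera-based methods rely on is that the frame-mean brightness of a skin region (the green channel is most sensitive to blood volume changes) carries the pulse as a small periodic oscillation. A self-contained sketch, with a synthetic series standing in for a real video and all parameters assumed:

```python
import math

FPS = 30.0       # assumed camera frame rate
DURATION_S = 20  # assumed recording length (seconds)

# Synthetic stand-in for the mean green-channel value of a skin region per
# frame; a real recording would supply this series from the RGB video.
pulse_hz = 1.2   # simulated 72 bpm "subject"
green = [100.0 + 2.0 * math.sin(2 * math.pi * pulse_hz * n / FPS + 0.7)
         for n in range(int(FPS * DURATION_S))]

# Count upward mean-crossings of the signal: one per heartbeat
mean = sum(green) / len(green)
beats = sum(1 for a, b in zip(green, green[1:]) if a < mean <= b)
bpm = beats / DURATION_S * 60.0
print(f"estimated pulse ~ {bpm:.0f} bpm")
```

    Real signals are far noisier; published camera-plethysmography work typically band-pass filters the series and reads the pulse from a spectral peak rather than from raw crossings.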

  15. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
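
    The range data that stereo supplies to the shape-based classifier comes from triangulation: depth Z = f·B/d for focal length f in pixels, baseline B, and disparity d. A sketch with assumed rig parameters, not the system's actual ones:

```python
def disparity_to_range(focal_px, baseline_m, disparity_px):
    """Depth (m) of a stereo match: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 800 px focal length, 9 cm baseline
z_m = disparity_to_range(800.0, 0.09, 4.0)
print(f"4 px disparity -> {z_m:.1f} m")
```

    Because Z varies as 1/d, range resolution degrades quadratically with distance, which is why the abstract's point about cameras' high angular resolution (fine disparity steps) matters for long detection ranges.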

  16. Ikonos Imagery Product Nonuniformity Assessment

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Zanoni, Vicki; Pagnutti, Mary; Holekamp, Kara; Smith, Charles

    2002-01-01

    During the early stages of the NASA Scientific Data Purchase (SDP) program, three approximately equal vertical stripes were observable in IKONOS imagery of highly spatially uniform sites. Although these effects appeared to be less than a few percent of the mean signal, several investigators requested new imagery. Over time, Space Imaging updated its processing to minimize these artifacts. This, however, produced differences among Space Imaging products derived from archive imagery processed at different times: imagery processed before 2/22/01 used one set of coefficients, while imagery processed after that date requires another. Because Space Imaging produces its products from raw imagery, changes in the ground processing over time can change the delivered digital number (DN) values, even for identical orders of a previously acquired scene. NASA Stennis initiated studies to investigate the magnitude of these artifacts and their changes over the lifetime of the system and before and after processing updates.

  17. Possible Extent of Ancient Lake in Gale Crater, Mars

    NASA Image and Video Library

    2013-12-09

    This illustration depicts a concept for the possible extent of an ancient lake inside Gale Crater. The base map combines image data from the Context Camera on NASA's Mars Reconnaissance Orbiter and color information from Viking Orbiter imagery.

  18. JPL-19650324-RANGERf-0001-AVC2002151 Ranger 9 Impacts Moon

    NASA Image and Video Library

    1965-03-24

    Ranger 9 was the last of the Ranger series of spacecraft launched in the 1960s to explore the moon and was designed to image and impact the moon's crater Alphonsus. Includes imagery from the onboard cameras.

  19. Direct Visualization of Shock Waves in Supersonic Space Shuttle Flight

    NASA Technical Reports Server (NTRS)

    OFarrell, J. M.; Rieckhoff, T. J.

    2011-01-01

    Direct observation of shock boundaries is rare. This Technical Memorandum describes direct observation of shock waves produced by the space shuttle vehicle during STS-114 and STS-110 in imagery provided by NASA's tracking cameras.

  20. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features that make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the cameras' technical features, this includes a detailed accuracy test to define the range of applications. In combination with the cameras, three different frame grabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a way to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and would make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.
