Sample records for aerial video imagery

  1. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery

    Treesearch

    B. Cooke; A. Saucier

    1995-01-01

    Scientists with the USDA Forest Service are currently assessing the usefulness of aerial video imagery for various purposes including midcycle inventory updates. The potential of video image data for these purposes may be compromised by scan line interleaving displacement problems. Interleaving displacement problems cause features in video raster datasets to have...

  2. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to the quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, precise 3D measurement of objects poses some challenges. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Digital Surface Model (DSM) generation from thermal video imagery is then performed in four steps. Initially, frames are extracted from the video; then tie points are generated with the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied, and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor equipped with a 25 mm lens and is mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the accuracy of the 3D model generated from thermal images is comparable to that of the DSM generated from visible images, although the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparison of the generated DSM with the 9 measured GCPs in the area shows a Root Mean Square Error (RMSE) smaller than 5 decimetres in both the X and Y directions and of 1.6 metres in the Z direction.
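The per-axis accuracy check reported above can be sketched as a small RMSE computation between DSM-derived points and surveyed ground control points. This is an illustrative stand-in with hypothetical coordinates, not the paper's data:

```python
import math

def per_axis_rmse(estimated, measured):
    """Root Mean Square Error per coordinate axis between DSM-derived
    points and ground control points (GCPs), given as (x, y, z) tuples."""
    n = len(estimated)
    sums = [0.0, 0.0, 0.0]
    for e, m in zip(estimated, measured):
        for i in range(3):
            sums[i] += (e[i] - m[i]) ** 2
    return tuple(math.sqrt(s / n) for s in sums)

# Hypothetical check points (not the paper's data): estimates vs. surveyed GCPs.
est = [(10.2, 5.1, 100.5), (20.0, 15.3, 102.0)]
gcp = [(10.0, 5.0, 101.0), (20.1, 15.0, 103.0)]
rmse_x, rmse_y, rmse_z = per_axis_rmse(est, gcp)
```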

  3. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale-Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.

  4. Persistent aerial video registration and fast multi-view mosaicing.

    PubMed

    Molina, Edgardo; Zhu, Zhigang

    2014-05-01

    Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speed or when the global positioning system (GPS) signal contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform an online registration with drift correction. We split the persistent aerial imagery collection into individual cycles of the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all in alignment with the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second and later passes can be generated and visualized online, as there is no further batch error correction.
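The cycle-splitting step described above can be sketched as follows. The function names are hypothetical, and the actual registration and drift correction are omitted; the sketch only shows how later passes are paired frame-by-frame with the corrected base cycle:

```python
def split_into_cycles(frames, cycle_len):
    """Split a persistent collection (repeated passes over the same scene)
    into individual cycles of cycle_len frames each."""
    return [frames[i:i + cycle_len] for i in range(0, len(frames), cycle_len)]

def pair_with_reference(cycles):
    """Pair every frame of each later pass with the corresponding frame of
    the corrected base cycle, so it can be registered online against it."""
    base = cycles[0]
    pairs = []
    for cyc in cycles[1:]:
        pairs.extend(zip(base, cyc))
    return pairs
```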

  5. Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment

    NASA Astrophysics Data System (ADS)

    Cusicanqui, Johnny; Kerle, Norman; Nex, Francesco

    2018-06-01

    Remote sensing has evolved into the most efficient approach to assessing post-disaster structural damage in extensively affected areas, through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles, and the dense color 3-D models derived from it, are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited due to tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and of their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usabilities for video and photos, as shown by the difference of only 2 cm between the accuracies of video- and photo-based 3-D point clouds. Despite the low video resolution, the usability of these data was compensated for by a small ground sampling distance. Low quality and limited applicability resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of...

  6. Advanced Image Processing of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn; Jobson, Daniel J.; Rahman, Zia-ur; Hines, Glenn

    2006-01-01

    Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at the NASA Langley Research Center have developed techniques which have proven to be useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system will be described and numerous examples will be shown.

  7. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric

    2018-05-01

    Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and based on the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.

  8. Knowledge-based understanding of aerial surveillance video

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren

    2006-05-01

    Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph and the graph is summarized spatially, temporally and semantically using ontology guided sub-graph matching and re-writing. The system exploits domain specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.

  9. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    NASA Astrophysics Data System (ADS)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
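The ray-intersection geolocation principle described above can be illustrated with a minimal sketch that intersects a camera ray with a flat ground plane. The real system iterates this against DTED, so the single-plane version here is an assumption for illustration only:

```python
def ray_ground_intersection(cam_pos, look_dir, ground_elev=0.0):
    """Intersect a camera ray with a flat ground plane at ground_elev.
    cam_pos = (x, y, z); look_dir = (dx, dy, dz) with dz < 0 (pointing down).
    A DTED-based system would refine t iteratively against the terrain;
    a single horizontal plane stands in for the terrain here."""
    x, y, z = cam_pos
    dx, dy, dz = look_dir
    if dz >= 0:
        raise ValueError("ray does not point toward the ground")
    t = (ground_elev - z) / dz          # ray parameter where z hits the plane
    return (x + t * dx, y + t * dy, ground_elev)
```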

  10. Urban cover mapping using digital, high-resolution aerial imagery

    Treesearch

    Soojeong Myeong; David J. Nowak; Paul F. Hopkins; Robert H. Brock

    2003-01-01

    High-spatial resolution digital color-infrared aerial imagery of Syracuse, NY was analyzed to test methods for developing land cover classifications for an urban area. Five cover types were mapped: tree/shrub, grass/herbaceous, bare soil, water and impervious surface. Challenges in high-spatial resolution imagery such as shadow effect and similarity in spectral...

  11. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453
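The association of chat call-outs with video tracks can be illustrated by a minimal temporal-overlap rule. The function name and data layout are hypothetical, and the probabilistic fusion described in the abstract is not modeled:

```python
def associate_chats_to_tracks(chats, tracks):
    """Associate each analyst call-out (chat) with the track(s) active at
    its timestamp. chats: list of (time, text) pairs;
    tracks: dict of track_id -> (start_time, end_time)."""
    assoc = {}
    for t, text in chats:
        active = [tid for tid, (t0, t1) in tracks.items() if t0 <= t <= t1]
        assoc.setdefault(text, []).extend(active)
    return assoc
```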

  12. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  13. Evaluation of orthomosics and digital surface models derived from aerial imagery for crop mapping

    USDA-ARS?s Scientific Manuscript database

    Orthomosics derived from aerial imagery acquired by consumer-grade cameras have been used for crop mapping. However, digital surface models (DSM) derived from aerial imagery have not been evaluated for this application. In this study, a novel method was proposed to extract crop height from DSM and t...

  14. Agency Video, Audio and Imagery Library

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  15. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery.

    PubMed

    Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda

    2015-11-04

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.

  16. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to team an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve an absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data, combined with live video images from an onboard camera, to register the local video images against a priori registered orthophotos. This yields a precise, driftless absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.

  17. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion, and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  18. Environmental applications utilizing digital aerial imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monday, H.M.

    1995-06-01

    This paper discusses the use of satellite imagery, aerial photography, and computerized airborne imagery as applied to environmental mapping, analysis, and monitoring. A project conducted by the City of Irving, Texas involves compliance with National Pollutant Discharge Elimination System (NPDES) requirements stipulated by the Environmental Protection Agency. The purpose of the project was the development and maintenance of a stormwater drainage utility. Digital imagery was collected for a portion of the city to map the City's porous and impervious surfaces, which will then be overlaid with property boundaries in the City's existing Geographic Information System (GIS). This information will allow the City to determine an equitable tax for each land parcel according to the amount of water each parcel contributes to the stormwater system. Another project involves environmental compliance for warm water discharges created by utility companies. Environmental consultants are using digital airborne imagery to analyze thermal plume effects as well as to monitor power generation facilities. A third project involves wetland restoration. Due to freeway and other forms of construction, plus a major reduction of fresh water supplies, the Southern California coastal wetlands are being seriously threatened. These wetlands, rich spawning grounds for plant and animal life, are home to thousands of waterfowl and shore birds which use this habitat for nesting and feeding. Under the leadership of Southern California Edison (SCE) and CALTRANS (the California Department of Transportation), several wetland areas such as the San Dieguito Lagoon (Del Mar, California), the Sweetwater Marsh (San Diego, California), and the Tijuana Estuary (San Diego, California) are being restored and closely monitored using digital airborne imagery.

  19. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in Apollo, Skylab, Apollo- Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  20. The effects of gender and music video imagery on sexual attitudes.

    PubMed

    Kalof, L

    1999-06-01

    This study examined the influence of gender and exposure to gender-stereotyped music video imagery on sexual attitudes (adversarial sexual beliefs, acceptance of rape myths, acceptance of interpersonal violence, and gender role stereotyping). A group of 44 U.S. college students were randomly assigned to 1 of 2 groups that viewed either a video portraying stereotyped sexual imagery or a video that excluded all sexual images. Exposure to traditional sexual imagery had a significant main effect on attitudes about adversarial sexual relationships, and gender had main effects on 3 of 4 sexual attitudes. There was some evidence of an interaction between gender and exposure to traditional sexual imagery on the acceptance of interpersonal violence.

  1. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a great deal of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation, and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and to optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure, and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images at different scales and with different land cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
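The correlation score at the core of such matching procedures can be sketched as plain normalized cross-correlation between two patches. This is a simplified stand-in for the geometrically constrained MIG3C step, operating on flat lists of intensities:

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized image patches
    (flat lists of intensities): the similarity score evaluated at each
    candidate position in correlation-based dense matching. Returns a value
    in [-1, 1], with 1 for perfectly linearly related patches."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    den = math.sqrt(sum((a - ma) ** 2 for a in patch_a) *
                    sum((b - mb) ** 2 for b in patch_b))
    return num / den if den else 0.0
```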

  2. Career Profile- Jim Ross, Aerial Photographer

    NASA Image and Video Library

    2016-12-21

    Check out what it takes to “capture the moment” at Mach speeds. The stunning aerial imagery of NASA Armstrong Flight Research Center comes from well-skilled photographers like Jim Ross, Photo Lead. This career profile video highlights Jim’s job responsibilities in documenting aircraft hardware installations, aerial research, and mission work that happens both on center and around the world. During Jim’s 27-year career, he has logged over 800 flight hours in twelve different types of aircraft.

  3. Systematic evaluation of deep learning based detection frameworks for aerial imagery

    NASA Astrophysics Data System (ADS)

    Sommer, Lars; Steinmann, Lucas; Schumann, Arne; Beyerer, Jürgen

    2018-04-01

    Object detection in aerial imagery is crucial for many applications in the civil and military domain. In recent years, deep learning based object detection frameworks have significantly outperformed conventional approaches based on hand-crafted features on several datasets. However, these detection frameworks are generally designed and optimized for common benchmark datasets, which differ considerably from aerial imagery, especially in object sizes. As already demonstrated for Faster R-CNN, several adaptations are necessary to account for these differences. In this work, we adapt several state-of-the-art detection frameworks, including Faster R-CNN, R-FCN, and the Single Shot MultiBox Detector (SSD), to aerial imagery. We discuss in detail the adaptations that mainly improve the detection accuracy of all frameworks. As the output of deeper convolutional layers comprises more semantic information, these layers are generally used in detection frameworks as feature maps to locate and classify objects. However, the resolution of these feature maps is insufficient for handling small object instances, which results in inaccurate localization or incorrect classification of small objects. Furthermore, state-of-the-art detection frameworks perform bounding box regression to predict the exact object location, using so-called anchor or default boxes as references. We demonstrate how an appropriate choice of anchor box sizes can considerably improve detection performance. Furthermore, we evaluate the impact of the performed adaptations on two publicly available datasets to account for various ground sampling distances and differing backgrounds. The presented adaptations can be used as a guideline for further datasets or detection frameworks.
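Matching anchor boxes to ground-truth objects in such frameworks typically relies on intersection-over-union, which is why anchor sizes matter for small objects. A minimal sketch of standard IoU (not the paper's specific framework code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the overlap criterion used when assigning anchor
    boxes to ground-truth objects during detector training."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An anchor far larger than a small object yields a low IoU and the object goes unmatched, which is the failure mode the abstract's anchor-size adaptation addresses.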

  4. Aerial video mosaicking using binary feature tracking

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2015-05-01

    Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
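    The tracking-style matching described above (binary descriptors compared by Hamming distance, with the search restricted to a spatial neighborhood of the keypoint's previous position) might be sketched as follows. The keypoint format, search radius, and distance threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_local(kps1, desc1, kps2, desc2, radius=20.0, max_dist=64):
    """Match each keypoint in frame 1 to the closest-descriptor keypoint in
    frame 2, searching only within `radius` pixels of its old position."""
    matches = []
    for i, (p, d) in enumerate(zip(kps1, desc1)):
        # spatial locality: only candidates near the previous location
        cand = [j for j, q in enumerate(kps2)
                if np.hypot(q[0] - p[0], q[1] - p[1]) <= radius]
        if not cand:
            continue
        dists = [hamming(d, desc2[j]) for j in cand]
        k = int(np.argmin(dists))
        if dists[k] <= max_dist:
            matches.append((i, cand[k]))
    return matches

# Toy two-frame sequence: the first keypoint moved slightly,
# the second left the search radius entirely.
kps1 = [(10.0, 10.0), (100.0, 100.0)]
desc1 = [np.array([1, 2, 3, 4], np.uint8), np.array([9, 9, 9, 9], np.uint8)]
kps2 = [(12.0, 11.0), (200.0, 200.0)]
desc2 = [np.array([1, 2, 3, 4], np.uint8), np.array([9, 9, 9, 9], np.uint8)]
print(match_local(kps1, desc1, kps2, desc2))   # [(0, 0)]
```

    Restricting candidates to a small window is what makes this cheaper than exhaustive descriptor matching between frames.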

  5. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned aerial vehicles offers higher resolution, easy acquisition, and real-time access, and has been widely used in mapping, target identification, and other fields in recent years. However, owing to operational limitations, the video images are unstable, targets move fast, and shooting backgrounds are complex, all of which make such video difficult to process. In other fields, especially computer vision, research on video imagery is far more extensive and is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, including research purposes, data sources, and the strengths and weaknesses of each technique, and explores the methods best suited to processing low-altitude remote sensing video imagery.

  6. Classification of a wetland area along the upper Mississippi River with aerial videography

    USGS Publications Warehouse

    Jennings, C.A.; Vohs, P.A.; Dewey, M.R.

    1992-01-01

    We evaluated the use of aerial videography for classifying wetland habitats along the upper Mississippi River and found the prompt availability of habitat feature maps to be the major advantage of the video imagery technique. We successfully produced feature maps from digitized video images that generally agreed with the known distribution and areal coverages of the major habitat types independently identified and quantified with photointerpretation techniques. However, video images were not sufficiently detailed to allow us to consistently discriminate among the classes of aquatic macrophytes present or to quantify their areal coverage. Our inability to consistently distinguish among emergent, floating, and submergent macrophytes from the feature maps may have been related to the structural complexity of the site, to our limited vegetation sampling, and to limitations in video imagery. We expect that careful site selection (i.e., the desired level of resolution is available from video imagery) and additional vegetation samples (e.g., along a transect) will allow improved assignment of spectral values to specific plant types and enhance plant classification from feature maps produced from video imagery.

  7. Unmanned Aerial Vehicles Produce High-Resolution Seasonally-Relevant Imagery for Classifying Wetland Vegetation

    NASA Astrophysics Data System (ADS)

    Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.

    2015-08-01

    With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine whether a UAV image and a SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of the submerged and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential of affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km2) wetlands, or portions of larger wetlands, throughout a year.
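    Once a raster has been classified into vegetation types, the percent cover comparison described above reduces to simple pixel ratios. A minimal sketch, with entirely hypothetical class codes standing in for the study's classification scheme:

```python
import numpy as np

# Hypothetical class codes for a classified wetland raster.
CLASSES = {0: "background", 1: "submerged", 2: "floating", 3: "emergent"}

def percent_cover(classified, codes=(1, 2, 3)):
    """Percent cover of each vegetation class over all vegetated pixels."""
    veg = np.isin(classified, codes)
    total = veg.sum()
    return {CLASSES[c]: 100.0 * (classified == c).sum() / total for c in codes}

# Tiny synthetic classified image (3 x 3 pixels).
raster = np.array([[1, 1, 2],
                   [3, 3, 0],
                   [3, 3, 2]])
print(percent_cover(raster))
```

    The same counts, aggregated within validation polygons instead of the whole scene, would support the per-class accuracy comparison between the UAV and SWOOP products.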

  8. Very Large Scale Aerial (VLSA) imagery for assessing postfire bitterbrush recovery

    Treesearch

    Corey A. Moffet; J. Bret Taylor; D. Terrance Booth

    2008-01-01

    Very large scale aerial (VLSA) imagery is an efficient tool for monitoring bare ground and cover on extensive rangelands. This study was conducted to determine whether VLSA images could be used to detect differences in antelope bitterbrush (Purshia tridentata Pursh DC) cover and density among similar ecological sites with varying postfire recovery...

  9. Canopy Density Mapping on Ultracam-D Aerial Imagery in Zagros Woodlands, Iran

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Khodaee, Z.

    2013-09-01

    Canopy density maps express different characteristics of forest stands, especially in woodlands. Obtaining such maps by field measurements is expensive and time-consuming, so suitable techniques are needed to produce them for use in the sustainable management of woodland ecosystems. In this research, a robust procedure is proposed for deriving canopy density maps from very high spatial resolution UltraCam-D aerial imagery, newly acquired over the Zagros woodlands by the Iran National Geographic Organization (NGO). A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in the Zagros woodlands, Iran. The very high spatial resolution aerial imagery of the plot, purchased from the NGO, was classified with the kNN technique and the tree crowns were extracted precisely. Canopy density was then determined in each cell of meshes of different sizes overlaid on the study area map. The accuracy of the final maps was assessed against ground truth obtained by complete field measurements. The results showed that the proposed method was sufficiently efficient in the study area. The final canopy density map obtained with a mesh of 30 ares (3,000 m²) cell size achieved 80% overall accuracy and a KHAT coefficient of agreement of 0.61, indicating strong agreement with the observed samples. The method can also be tested in other case studies to assess its capability for canopy density map production in woodlands.
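    The accuracy figures reported above (overall accuracy and the KHAT coefficient) both derive from a confusion matrix between the mapped and observed classes. A minimal sketch; the 2-class matrix below is invented for illustration and is not the study's data.

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and KHAT (Cohen's kappa) from a confusion matrix
    whose rows are mapped classes and columns are reference classes."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical 2-class check of a canopy-density map against field plots.
cm = [[40, 10],
      [ 5, 45]]
acc, khat = overall_accuracy_and_kappa(cm)
print(round(acc, 2), round(khat, 2))   # 0.85 0.7
```

    KHAT discounts the agreement expected by chance, which is why it is reported alongside overall accuracy.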

  10. Detecting blind building façades from highly overlapping wide angle aerial imagery

    NASA Astrophysics Data System (ADS)

    Burochin, Jean-Pascal; Vallet, Bruno; Brédif, Mathieu; Mallet, Clément; Brosset, Thomas; Paparoditis, Nicolas

    2014-10-01

    This paper deals with the identification of blind building façades, i.e. façades which have no openings, in wide angle aerial images with a decimeter pixel size, acquired by nadir-looking cameras. This blindness characterization is in general crucial for real estate estimation and has, at least in France, particular importance in evaluating the legal permission to build on a parcel under local urban planning schemes. We assume that we have at our disposal an aerial survey with a relatively high stereo overlap along-track and across-track, and a 3D city model of LoD 1 that may have been generated from the input images. The 3D model is textured with the aerial imagery by taking 3D occlusions into account and by selecting, for each façade, the best available resolution texture that sees the whole façade. We then parse all 3D façade textures looking for evidence of openings (windows or doors). This evidence is characterized by a comprehensive set of basic radiometric and geometrical features. The blindness prognosis is then made with a supervised classification (SVM). Despite the relatively low resolution of the images, we reach a classification accuracy of around 85% on decimeter resolution imagery with 60% × 40% stereo overlap. On the one hand, we show that the results are very sensitive to the texture resampling process and to vegetation on façade textures. On the other hand, the most relevant features for our classification framework relate to texture uniformity, horizontal aspect, and the maximal contrast of the opening detections. We conclude that standard aerial imagery used to build 3D city models can also be exploited, to some extent and at no additional cost, for façade blindness characterization.

  11. Monitoring spotted knapweed with very-large-scale-aerial imagery in sagebrush-dominated rangelands.

    USDA-ARS?s Scientific Manuscript database

    Spotted knapweed (Centaurea stoebe L.) invades and destroys productive rangelands. Monitoring weed infestations across extensive and remote landscapes can be difficult and costly. We evaluated the efficacy of very-large-scale-aerial (VLSA) imagery for detection and quantification of spotted knapwee...

  12. A workflow for extracting plot-level biophysical indicators from aerially acquired multispectral imagery

    USDA-ARS?s Scientific Manuscript database

    Advances in technologies associated with unmanned aerial vehicles (UAVs) has allowed for researchers, farmers and agribusinesses to incorporate UAVs coupled with various imaging systems into data collection activities and aid expert systems for making decisions. Multispectral imageries allow for a q...

  13. Google Haul Out: Earth Observation Imagery and Digital Aerial Surveys in Coastal Wildlife Management and Abundance Estimation

    PubMed Central

    Moxley, Jerry H.; Bogomolni, Andrea; Hammill, Mike O.; Moore, Kathleen M. T.; Polito, Michael J.; Sette, Lisa; Sharp, W. Brian; Waring, Gordon T.; Gilbert, James R.; Halpin, Patrick N.; Johnston, David W.

    2017-01-01

    As the sampling frequency and resolution of Earth observation imagery increase, there are growing opportunities for novel applications in population monitoring. New methods are required to apply established analytical approaches to data collected from new observation platforms (e.g., satellites and unmanned aerial vehicles). Here, we present a method that estimates regional seasonal abundances for an understudied and growing population of gray seals (Halichoerus grypus) in southeastern Massachusetts, using opportunistic observations in Google Earth imagery. Abundance estimates are derived from digital aerial survey counts by adapting established correction-based analyses with telemetry behavioral observation to quantify survey biases. The result is a first regional understanding of gray seal abundance in the northeast US through opportunistic Earth observation imagery and repurposed animal telemetry data. As species observation data from Earth observation imagery become more ubiquitous, such methods provide a robust, adaptable, and cost-effective solution to monitoring animal colonies and understanding species abundances. PMID:29599542

  14. Google Haul Out: Earth Observation Imagery and Digital Aerial Surveys in Coastal Wildlife Management and Abundance Estimation.

    PubMed

    Moxley, Jerry H; Bogomolni, Andrea; Hammill, Mike O; Moore, Kathleen M T; Polito, Michael J; Sette, Lisa; Sharp, W Brian; Waring, Gordon T; Gilbert, James R; Halpin, Patrick N; Johnston, David W

    2017-08-01

    As the sampling frequency and resolution of Earth observation imagery increase, there are growing opportunities for novel applications in population monitoring. New methods are required to apply established analytical approaches to data collected from new observation platforms (e.g., satellites and unmanned aerial vehicles). Here, we present a method that estimates regional seasonal abundances for an understudied and growing population of gray seals (Halichoerus grypus) in southeastern Massachusetts, using opportunistic observations in Google Earth imagery. Abundance estimates are derived from digital aerial survey counts by adapting established correction-based analyses with telemetry behavioral observation to quantify survey biases. The result is a first regional understanding of gray seal abundance in the northeast US through opportunistic Earth observation imagery and repurposed animal telemetry data. As species observation data from Earth observation imagery become more ubiquitous, such methods provide a robust, adaptable, and cost-effective solution to monitoring animal colonies and understanding species abundances.

  15. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery: QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without Wiener filtering and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For overall QBCS reconstruction performance, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With an experiment-driven methodology, QBCS-EPL obtains better reconstruction quality at a relatively moderate computational cost, which makes it desirable for aerial imagery applications.
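    The iterative projected Landweber reconstruction at the core of the QBCS framework alternates a gradient step toward consistency with the measurements and a thresholding projection that enforces sparsity. The sketch below uses a plain soft threshold in the signal domain as a simplified stand-in for the paper's wavelet-domain, entropy-aware model; the measurement matrix and signal are synthetic.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink coefficients toward zero; the sparsity-promoting projection."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def projected_landweber(y, Phi, tau, n_iter=500, step=None):
    """Landweber gradient step toward Phi x = y, then soft-thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # stable step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)   # gradient (Landweber) step
        x = soft_threshold(x, tau)             # thresholding projection
    return x

# Synthetic compressive-sensing problem: 3-sparse signal, 40 measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 1.5]
x_hat = projected_landweber(Phi @ x_true, Phi, tau=0.02)
print(np.linalg.norm(x_hat - x_true))
```

    In the full QBCS pipeline the measurements would additionally pass through quantization and entropy coding, and the thresholding would act on wavelet coefficients with factors tied to the bitrate.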

  16. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban areas.

  17. Mapping the Land: Aerial Imagery for Land Use Information. Resource Publications in Geography.

    ERIC Educational Resources Information Center

    Campbell, James B.

    Intended for geography students who are enrolled in, or who have completed, an introductory course in remote sensing; for geography researchers; and for professors, this publication focuses specifically on general issues regarding the organization and presentation of land use information derived from aerial imagery. Many of the ideas…

  18. Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.

    2018-02-01

    Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (<50 m wide) rivers from remotely sensed data by coupling high-resolution imagery with one-dimensional hydraulic modeling at so-called virtual gauging stations. These stations were placed where the river contracts under low flows, exposing a substantial portion of the river bed. Topography of the exposed river bed was photogrammetrically extracted from high-resolution aerial imagery, while the geometry of the remaining inundated portion of the channel was approximated from adjacent bank topography and maximum depth assumptions. The full channel bathymetry was used to create hydraulic models encompassing the virtual gauging stations. Discharge for each aerial survey was estimated with the hydraulic model by matching modeled and remotely sensed wetted widths. From these results, synthetic width-discharge rating curves were produced for each virtual gauging station. In situ observations were used to determine the accuracy of wetted widths extracted from imagery (mean error 0.36 m), extracted bathymetry (mean vertical RMSE 0.23 m), and discharge (mean percent error 7% with a standard deviation of 6%). Sensitivity analyses were conducted to determine the influence of inundated channel bathymetry and roughness parameters on estimated discharge. Comparison of the synthetic rating curves produced through these sensitivity analyses shows that reasonable parameter ranges result in mean percent errors in predicted discharge of 12%-27%.
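    Applying a synthetic width-discharge rating curve of the kind described above amounts to interpolating an observed wetted width against the hydraulic model's (width, discharge) pairs. A minimal sketch; the curve values below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical synthetic rating curve from a 1-D hydraulic model:
# modeled wetted width (m) vs. modeled discharge (m^3/s) at one virtual gauge.
model_width = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
model_q     = np.array([0.5, 1.2, 2.6, 4.8,  8.0])

def discharge_from_width(observed_width):
    """Estimate discharge by interpolating the width-discharge rating curve."""
    return float(np.interp(observed_width, model_width, model_q))

# Wetted width extracted from one aerial survey.
q = discharge_from_width(9.0)
print(q)   # 3.7
```

    In practice each virtual gauging station would carry its own curve, and the sensitivity of `model_q` to roughness and assumed bathymetry drives the reported 12%-27% error range.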

  19. Mapping snow depth in complex alpine terrain with close range aerial imagery - estimating the spatial uncertainties of repeat autonomous aerial surveys over an active rock glacier

    NASA Astrophysics Data System (ADS)

    Goetz, Jason; Marcer, Marco; Bodin, Xavier; Brenning, Alexander

    2017-04-01

    Snow depth mapping in open areas using close range aerial imagery is just one of many cases where developments in structure-from-motion and multi-view-stereo (SfM-MVS) 3D reconstruction techniques have been applied in the geosciences, and with good reason. The ability to increase the spatial resolution and frequency of observations may improve our understanding of how snow depth distribution varies through space and time. However, to ensure accurate snow depth observations from close range sensing, we must adequately characterize the uncertainty of our measurement techniques. In this study, we explore the spatial uncertainties of snow elevation models for estimating snow depth in complex alpine terrain from close range aerial imagery. We accomplish this by conducting repeat autonomous aerial surveys over a snow-covered active rock glacier located in the French Alps. The imagery obtained from each flight of an unmanned aerial vehicle (UAV) is used to create an individual digital elevation model (DEM) of the snow surface; as a result, we obtain multiple DEMs of the snow surface for the same site. These DEMs are produced with the photogrammetry software Agisoft Photoscan and georeferenced within Photoscan using the geotagged imagery from an onboard GNSS in combination with ground targets placed around the rock glacier, surveyed with highly accurate RTK-GNSS equipment. The random error associated with multi-temporal DEMs of the snow surface is estimated from the repeat aerial survey data. The multiple flights are designed to follow the same flight path and altitude above the ground to simulate optimal conditions for repeat surveys of the site, and thus to estimate the maximum precision attainable with our snow-elevation measurement technique.
The bias of the DEMs is assessed with RTK-GNSS survey observations of the snow surface elevation of the area on and surrounding
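    The repeat-survey error estimation described above reduces to differencing co-registered DEM grids: the per-cell spread across repeat flights estimates random error, and snow depth itself is the difference between snow-on and snow-free surfaces. A minimal sketch on a tiny synthetic grid (all elevations invented):

```python
import numpy as np

def repeat_survey_uncertainty(dems):
    """Per-cell standard deviation across repeat DEMs of the same snow
    surface: an estimate of the random error of the SfM-MVS technique."""
    stack = np.stack(dems)                 # shape (n_flights, rows, cols)
    return stack.std(axis=0, ddof=1)

def snow_depth(dem_snow, dem_bare):
    """Snow depth as the snow-on minus snow-free elevation difference."""
    return dem_snow - dem_bare

# Three repeat flights over the same (tiny, synthetic) 2 x 2 grid, in metres.
dems = [np.array([[10.00, 10.10], [10.20, 10.30]]),
        np.array([[10.02, 10.08], [10.22, 10.28]]),
        np.array([[ 9.98, 10.12], [10.18, 10.32]])]
sigma = repeat_survey_uncertainty(dems)
depth = snow_depth(np.mean(dems, axis=0), np.full((2, 2), 9.5))
print(sigma)
print(depth)
```

    Systematic bias, by contrast, needs independent truth, which is why the study checks the DEMs against RTK-GNSS points on the snow surface.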

  20. Precise Target Geolocation and Tracking Based on UAV Video Imagery

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU, allowing a positioning accuracy of 5 to 10 meters. This accuracy is insufficient for applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and velocity. Accurate geolocation of targets during image acquisition is performed via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters from the onboard IMU and RTK GPS sensors, Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration. Compared with code-based ordinary GPS, the results indicate that RTK observations with the proposed method improve target geolocation accuracy by more than a factor of ten.
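    The Kalman filtering step above can be illustrated with a simplified version: a linear constant-velocity filter over noisy 1-D target positions, standing in for the paper's extended Kalman filter over geolocated 3-D tracks. The motion model, noise values, and track are all assumptions for the sketch.

```python
import numpy as np

def kalman_smooth_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over noisy 1-D positions, returning
    filtered (position, velocity) estimates at each step."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x, P = np.array([[measurements[0]], [0.0]]), np.eye(2)
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)  # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        out.append((float(x[0, 0]), float(x[1, 0])))
    return out

# Synthetic target moving at 2 m/s, observed with 0.5 m noise.
rng = np.random.default_rng(1)
zs = 2.0 * np.arange(50) + rng.normal(0, 0.5, 50)
track = kalman_smooth_track(zs)
print(track[-1])   # last (position, velocity) estimate, near (98, 2)
```

    The filter's velocity state is what supplies the "target velocity" estimate mentioned in the abstract; an EKF would additionally linearize the camera geolocation model at each step.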

  1. Operator Selection for Unmanned Aerial Vehicle Operators: A Comparison of Video Game Players and Manned Aircraft Pilots

    DTIC Science & Technology

    2009-11-01

    AFRL-RH-WP-TR-2010-0057. Operator Selection for Unmanned Aerial Vehicle Operators: A Comparison of Video Game Players and Manned Aircraft Pilots. Reporting period: Oct 2008 to 30 Nov 2009. ...training regimens leading to a potential shortage of qualified UAS pilots. This study attempted to discover whether video game players (VGPs) possess...

  2. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data

    PubMed Central

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measuring forest structural parameters; however, the accuracy of crown width extraction is not satisfactory with low-density LiDAR, especially in high canopy cover forest. We used high resolution aerial imagery together with a low-density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm, yielding a high registration accuracy of 0.5 pixels. A local maximum filter, watershed segmentation, and object-oriented image segmentation are then used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system plays an important role in registration with the aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using the low-density LiDAR data alone. PMID:22573971
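    The local maximum filter mentioned above finds treetop candidates as CHM cells that dominate their neighborhood. A minimal pure-numpy sketch; the window size, height threshold, and toy CHM are illustrative assumptions.

```python
import numpy as np

def local_maxima(chm, window=1, min_height=2.0):
    """Treetop candidates: CHM cells that are the maximum of their
    (2*window+1)^2 neighborhood and taller than `min_height` metres."""
    rows, cols = chm.shape
    tops = []
    for i in range(window, rows - window):
        for j in range(window, cols - window):
            patch = chm[i - window:i + window + 1, j - window:j + window + 1]
            if chm[i, j] >= min_height and chm[i, j] == patch.max():
                tops.append((i, j))
    return tops

# Tiny synthetic canopy height model with two crown apexes.
chm = np.zeros((5, 5))
chm[1, 1] = 8.0   # one crown apex, 8 m tall
chm[3, 3] = 6.5   # a second, lower apex
print(local_maxima(chm))   # [(1, 1), (3, 3)]
```

    The detected maxima then seed the watershed segmentation that delineates each crown, from which crown width is measured.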

  3. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects, as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem: each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described and applied to monocular and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  4. Parabolic dune reactivation and migration at Napeague, NY, USA: Insights from aerial and GPR imagery

    NASA Astrophysics Data System (ADS)

    Girardi, James D.; Davis, Dan M.

    2010-02-01

    Observations from mapping since the 19th century and aerial imagery since 1930 have been used to study changes in the aeolian geomorphology of coastal parabolic dunes over the last ~ 170 years in the Walking Dune Field, Napeague, NY. The five large parabolic dunes of the Walking Dune Field have all migrated across, or are presently interacting with, a variably forested area that has affected their migration, stabilization, and morphology. This study has concentrated on a dune with a particularly complex history of stabilization, reactivation, and migration. We have correlated that dune's surface evolution, as revealed by aerial imagery, with its internal structures imaged using 200 MHz and 500 MHz Ground Penetrating Radar (GPR) surveys. Both 2D (transect) and high-resolution 3D GPR surveys image downwind-dipping bedding planes that can be grouped by apparent dip angle into several discrete packages of beds reflecting distinct decadal-scale episodes of dune reactivation and growth. From aerial and high-resolution GPR imagery, we document a unique mode of reactivation and migration linked to upwind dune formation and parabolic dune interactions with forest trees. This study documents how dune-dune and dune-vegetation interactions have driven a mode of blowout deposition that has alternated on a decadal scale between opposite sides of a parabolic dune during reactivation and migration. The pattern of recent parabolic dune reactivation and migration in the Walking Dune Field appears to be somewhat more complex, and perhaps more sensitive to subtle environmental pressures, than an idealized growth model with uniform deposition and purely on-axis migration. This pattern, believed to be prevalent among other parabolic dunes in the Walking Dune Field, may also occur in many other places where similar observational constraints are unavailable.

  5. Wildlife Multispecies Remote Sensing Using Visible and Thermal Infrared Imagery Acquired from AN Unmanned Aerial Vehicle (uav)

    NASA Astrophysics Data System (ADS)

    Chrétien, L.-P.; Théau, J.; Ménard, P.

    2015-08-01

    Wildlife aerial surveys require time and significant resources. Multispecies detection could reduce costs to a single census for species that coexist spatially. Traditional methods are demanding for observers in terms of concentration and are not adapted to multispecies censuses. The processing of multispectral aerial imagery acquired from an unmanned aerial vehicle (UAV) represents a potential solution for multispecies detection. The method used in this study is based on a multicriteria object-based image analysis applied to visible and thermal infrared imagery acquired from a UAV. This project aimed to detect American bison, fallow deer, gray wolves, and elk located in separate enclosures with a known number of individuals. Results showed that all bison and elk were detected without errors, while for deer and wolves, 0-2 individuals per flight line were confused with ground elements or went undetected. The approach also detected the four targeted species simultaneously and separately, even in the presence of non-targeted species. These results confirm the potential of multispectral imagery acquired from a UAV for wildlife censuses. Operational application remains limited to small areas under current regulations and available technology. Standardization of the workflow will help reduce the time and expertise required for this technology.

  6. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture, and subsystem designs for the entry, which represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  7. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    Celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. Celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through frame differencing and directional low-pass filtering to reduce noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the nonlinear shallow water wave equation (NSWE)-based celerity computed using the measured depths and wave heights, the video-based celerity in general shows good agreement over the surf zone, except near the incipient wave-breaking locations. Across the breaker points, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The observed celerity from video imagery can be used to monitor nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity across the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
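    The depth-inversion idea mentioned above follows from the shallow-water dispersion relation: if celerity can be read from crest tracking, depth follows by inverting c = sqrt(g h). A minimal sketch; the nonlinear variant below is a crude bore-like scaling c ≈ sqrt(g (h + H)), assumed here for illustration rather than taken from the paper.

```python
G = 9.81  # gravitational acceleration, m/s^2

def depth_from_celerity_linear(c):
    """Invert the linear shallow-water relation c = sqrt(g h) for depth h."""
    return c ** 2 / G

def depth_from_celerity_nswe(c, wave_height):
    """Crude nonlinear correction: treat the crest as travelling at
    sqrt(g (h + H)), a bore-like scaling, so h = c^2 / g - H."""
    return c ** 2 / G - wave_height

c_obs = 4.0   # m/s, e.g. from crest tracking in the video
print(depth_from_celerity_linear(c_obs))        # ~1.63 m
print(depth_from_celerity_nswe(c_obs, 0.5))     # ~1.13 m
```

    The gap between the two estimates for the same observed celerity is exactly why the abstract notes that uncorrected excess celerity near the breaker points would bias the inverted bathymetry shallow or deep depending on the theory used.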

  8. Unsupervised building detection from irregularly spaced LiDAR and aerial imagery

    NASA Astrophysics Data System (ADS)

    Shorter, Nicholas Sven

    As more data sources containing 3-D information become available, interest in 3-D imaging has grown, including the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings, which can subsequently be reconstructed in 3-D using various methodologies. Building detection and reconstruction have commercial applications in urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services, and change detection in areas affected by natural disasters. They are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms relied solely on aerial imagery. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore captured LiDAR data as an additional feasible source of information. Additional sources of information can automate techniques (alleviating the need for manual user intervention) as well as increase their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating that certain operations be carried out manually, and functionality limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented that strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from just LiDAR data (first or last return), or just nadir, color aerial imagery. If data from both LiDAR and ...

  9. Tobacco imagery in video games: ratings and gamer recall.

    PubMed

    Forsyth, Susan R; Malone, Ruth E

    2016-09-01

    To assess whether tobacco content found in video games was appropriately labelled for tobacco-related content by the Entertainment Software Rating Board (ESRB). Sixty-five gamer participants (self-identified age range 13-50) were interviewed in person (n=25) or online (n=40) and asked (A) to list favourite games and (B) to name games that they could recall containing tobacco content. The ESRB database was searched for all games mentioned to ascertain whether they had been assigned tobacco-related content descriptors. Games were independently assessed for tobacco content by examining user-created game wiki sites and watching YouTube videos of gameplay. Games with tobacco-related ESRB content descriptors and/or with tobacco imagery verified by researchers were considered to contain tobacco content. Games identified by participants as including tobacco but lacking verifiable tobacco content were treated as not containing tobacco content. Participants recalled playing 140 unique games, of which 118 were listed in the ESRB database. Participants explicitly recalled tobacco content in 31% (37/118) of the games, of which 94% (35/37) included independently verified tobacco content. Only 8% (9/118) of the games had received ESRB tobacco-related content descriptors, but researchers verified that 42% (50/118) contained such content; 42% (49/118) of games were rated 'M' for mature (content deemed appropriate for ages 17+). Of these, 76% (37/49) contained verified tobacco content; however, only 4% (2/49) received ESRB tobacco-related content descriptors. Gamers are exposed to tobacco imagery in many video games. The ESRB is not a reliable source for determining whether video games contain tobacco imagery. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  10. Evaluation of the use of live aerial video for traffic management.

    DOT National Transportation Integrated Search

    1995-01-01

    This report describes the evaluation of an intelligent transportation system (ITS) demonstration project in which live aerial video of traffic conditions was captured by a rotary wing aircraft operated by the Fairfax County (Virginia) Police Departme...

  11. Vehicle detection from very-high-resolution (VHR) aerial imagery using attribute belief propagation (ABP)

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Li, Ying; Zhang, Li; Huang, Yuchun

    2016-10-01

    With the popularity of very-high-resolution (VHR) aerial imagery, the shape, color, and context attributes of vehicles are better characterized. Due to varying road surroundings and imaging conditions, vehicle attributes can be adversely affected, so that vehicles are mistakenly detected or missed. This paper aims to robustly extract rich attribute features for detecting vehicles in VHR imagery under different scenarios. Based on a hierarchical component tree of vehicle context, attribute belief propagation (ABP) is proposed to detect salient vehicles from a statistical perspective. With the Max-tree data structure, a multi-level component tree around the road network is efficiently created. The spatial relationship between a vehicle and its surrounding context is established with a belief definition of vehicle attributes. To effectively correct single-level belief errors, inter-level belief linkages enforce consistency of belief assignment between corresponding components at different levels. ABP starts from an initial set of vehicle beliefs calculated from vehicle attributes, and then iterates through each component by applying inter-level belief passing until convergence. The optimal vehicle belief of each component is obtained by iteratively minimizing its belief function. The proposed algorithm is tested on a diverse set of VHR imagery acquired in city and inter-city areas of western and southern China. Experimental results show that the proposed algorithm can detect vehicles efficiently and effectively suppress erroneous detections. The proposed ABP framework is promising for robustly classifying vehicles in VHR aerial imagery.

  12. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the proportion of empty images regularly exceeds 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. The large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle those image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.
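The two-stage combination described here (a cheap local detector on the full image, followed by a global segmentation applied only to extracted sub-images) can be sketched roughly as follows. The percentile-based candidate detector and the Otsu segmentation are stand-ins chosen for illustration, not the authors' exact operators:

```python
import numpy as np

def local_candidates(img, thresh_pct=99.0):
    """Cheap local operation on the full image: flag bright pixels
    above a percentile as candidate bird locations."""
    t = np.percentile(img, thresh_pct)
    ys, xs = np.nonzero(img > t)
    return list(zip(ys, xs))

def otsu_threshold(patch):
    """Global Otsu threshold, computed on a small sub-image only."""
    hist, edges = np.histogram(patch, bins=64)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, len(hist)):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2 / total**2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def segment_candidate(img, y, x, win=8):
    """Crop a window around a candidate and segment it globally;
    the expensive operation never sees the full image."""
    patch = img[max(0, y - win):y + win, max(0, x - win):x + win]
    return patch > otsu_threshold(patch)
```

The design point is the same as in the abstract: the global algorithm's cost depends only on the small window size, so arbitrarily large survey frames remain tractable.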

  13. Practical use of video imagery in nearshore oceanographic field studies

    USGS Publications Warehouse

    Holland, K.T.; Holman, R.A.; Lippmann, T.C.; Stanley, J.; Plant, N.

    1997-01-01

    An approach was developed for using video imagery to quantify, in terms of both spatial and temporal dimensions, a number of naturally occurring (nearshore) physical processes. The complete method is presented, including the derivation of the geometrical relationships relating image and ground coordinates, principles to be considered when working with video imagery and the two-step strategy for calibration of the camera model. The techniques are founded on the principles of photogrammetry, account for difficulties inherent in the use of video signals, and have been adapted to allow for flexibility of use in field studies. Examples from field experiments indicate that this approach is both accurate and applicable under the conditions typically experienced when sampling in coastal regions. Several applications of the camera model are discussed, including the measurement of nearshore fluid processes, sand bar length scales, foreshore topography, and drifter motions. Although we have applied this method to the measurement of nearshore processes and morphologic features, these same techniques are transferable to studies in other geophysical settings.
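The image-to-ground geometry underlying such a camera model can be illustrated, in heavily simplified form, for an idealized nadir-looking pinhole camera over a flat plane. The paper's actual model handles oblique views and the difficulties of video signals; the names and parameters below are hypothetical:

```python
def image_to_ground(u, v, cam_height, f, cx, cy, pixel_size):
    """Map pixel (u, v) to ground-plane coordinates for an idealized
    nadir pinhole camera at height cam_height (m) above a flat plane.
    f is focal length (m); pixel_size is sensor pitch (m/pixel).
    By similar triangles: X = (u - cx) * pixel_size * cam_height / f."""
    X = (u - cx) * pixel_size * cam_height / f
    Y = (v - cy) * pixel_size * cam_height / f
    return X, Y

def ground_to_image(X, Y, cam_height, f, cx, cy, pixel_size):
    """Inverse mapping: ground coordinates back to pixel coordinates."""
    u = X * f / (pixel_size * cam_height) + cx
    v = Y * f / (pixel_size * cam_height) + cy
    return u, v
```

A full photogrammetric model replaces this similar-triangles relation with the collinearity equations plus a two-step camera calibration, as the record describes, but the round-trip property (image to ground and back) is the same.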

  14. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.

    PubMed

    Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei

    2018-01-01

    Appropriate Site-Specific Weed Management (SSWM) is crucial to ensuring crop yields. For SSWM over large areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. A Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and a skip architecture was applied to increase prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935, and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
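The accuracy figures reported here are standard confusion-matrix metrics. A minimal sketch of how overall accuracy and per-class recall are computed (the counts below are made up for illustration, not the paper's data):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = trace / total of a confusion matrix,
    where cm[i][j] counts pixels of true class i predicted as j."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def class_recall(cm, k):
    """Recall (producer's accuracy) for class k: correct / actual."""
    cm = np.asarray(cm, dtype=float)
    return cm[k, k] / cm[k, :].sum()
```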

  15. Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.

    2015-12-01

    There is an increasingly large number of uses for Unmanned Aerial Vehicles (UAVs), from surveillance to mapping and target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This low accuracy implies that they cannot be used in applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters achieved by the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.
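A linear Kalman filter with a constant-velocity state, as used here to smooth target location and estimate target velocity, can be sketched in one dimension. The noise parameters are illustrative, not taken from the paper:

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1.0, r=1.0):
    """Linear Kalman filter with a constant-velocity model, smoothing
    noisy 1-D target position measurements zs and estimating velocity.
    State x = [position, velocity]; q, r are process/measurement noise."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                   # state transition
    H = np.array([[1.0, 0.0]])                              # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                     # process noise
    R = np.array([[r]])                                     # measurement noise
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append((x[0, 0], x[1, 0]))
    return out
```

For a target moving at constant velocity, the filtered position converges to the track and the velocity state converges to the true speed, which is exactly the smoothed location/velocity pair the record describes.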

  16. Forest fuel treatment detection using multi-temporal airborne Lidar data and high resolution aerial imagery ---- A case study at Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Collins, B.; Fry, D.; Kelly, M.

    2014-12-01

    Forest fuel treatments (FFTs) are often employed in Sierra Nevada forests (located in California, US) to enhance forest health, regulate stand density, and reduce wildfire risk. However, there have been concerns that FFTs may have negative impacts on certain protected wildlife species. Due to constraints and the protection of resources (e.g., perennial streams, cultural resources, wildlife habitat, etc.), the actual FFT extents are usually different from the planned extents. Identifying the actual extent of treated areas is of primary importance for understanding the environmental influence of FFTs. Light detection and ranging (Lidar) is a powerful remote sensing technique that can provide accurate forest structure measurements, offering great potential for monitoring forest changes. This study used canopy height model (CHM) and canopy cover (CC) products derived from multi-temporal airborne Lidar data to detect FFTs with an approach combining a pixel-wise thresholding method and an object-of-interest segmentation method. We also investigated forest change following the implementation of landscape-scale FFT projects through the use of the normalized difference vegetation index (NDVI) and standardized principal component analysis (PCA) from multi-temporal high resolution aerial imagery. The same FFT detection routine was applied to the Lidar data and the aerial imagery in order to compare their capabilities for FFT detection. Our results demonstrated that FFT detection using Lidar-derived CC products produced both the highest total accuracy and kappa coefficient, and was more robust at identifying areas with light FFTs. The accuracy using Lidar-derived CHM products was significantly lower than that using Lidar-derived CC, but still slightly higher than that using aerial imagery. FFT detection using NDVI and standardized PCA from multi-temporal aerial imagery produced almost identical total accuracy and kappa coefficient ...

  17. Precision measurements from very-large scale aerial digital imagery.

    PubMed

    Booth, D Terrance; Cox, Samuel E; Berryman, Robert D

    2006-01-01

    Resource managers need length and width measurements of a variety of items, including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches, and other features important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very large scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel, using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface that accounts for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured with a high-frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss applications of these tools, including error estimates. We found that measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with an error of less than 10%. We recommend these software packages as a means of expanding the utility of aerial image data.
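The image-resolution calculation that these tools automate reduces, for a vertical camera over level ground, to the familiar ground-sample-distance relation between altitude AGL, focal length, and sensor pixel pitch. A hedged sketch (function names are hypothetical, not from the described software):

```python
def ground_sample_distance(agl_m, focal_m, pixel_pitch_m):
    """Ground sample distance (m/pixel) for a vertical camera:
    GSD = AGL * pixel_pitch / focal_length."""
    return agl_m * pixel_pitch_m / focal_m

def object_length_m(n_pixels, agl_m, focal_m, pixel_pitch_m):
    """Length of an object spanning n_pixels in the image."""
    return n_pixels * ground_sample_distance(agl_m, focal_m, pixel_pitch_m)
```

This is why small AGL changes matter: at 100 m AGL with a 100 mm lens and 9 µm pixels the GSD is 9 mm/pixel, within the 5-26 mm/pixel collection range quoted above, and every meter of altitude error shifts every measured length proportionally.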

  18. Influence of GSD for 3D City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, supporting decision making to set a mark in urban development. MOMRA is responsible for large scale mapping at the 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, for 10cm, 20cm and 40cm GSD, with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses the Aerial Triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surface and undulations. Real-time 3D visualization and interactive exploration thus support planning processes by providing multiple stakeholders, such as decision makers, architects, urban planners, authorities, citizens or investors, with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for dealing with exotic conditions through better and more advanced viewing technological infrastructure. Riyadh on one side is 5700m above sea level while Abha city is at 2300m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities.
In this research paper, the influence of aerial imagery of different GSD (Ground Sample Distance) with Aerial Triangulation is examined for 3D visualization in different regions of the Kingdom, to check which scale is more sophisticated for obtaining better results and is cost manageable, with GSD (7.5cm, 10cm, 20cm and 40cm ...

  19. Estimation of walrus populations on sea ice with infrared imagery and aerial photography

    USGS Publications Warehouse

    Udevitz, M.S.; Burn, D.M.; Webber, M.A.

    2008-01-01

    Population sizes of ice-associated pinnipeds have often been estimated with visual or photographic aerial surveys, but these methods require relatively slow speeds and low altitudes, limiting the area they can cover. Recent developments in infrared imagery and its integration with digital photography could allow substantially larger areas to be surveyed and more accurate enumeration of individuals, thereby solving major problems with previous survey methods. We conducted a trial survey in April 2003 to estimate the number of Pacific walruses (Odobenus rosmarus divergens) hauled out on sea ice around St. Lawrence Island, Alaska. The survey used high altitude infrared imagery to detect groups of walruses on strip transects. Low altitude digital photography was used to determine the number of walruses in a sample of detected groups and calibrate the infrared imagery for estimating the total number of walruses. We propose a survey design incorporating this approach with satellite radio telemetry to estimate the proportion of the population in the water and additional low-level flights to estimate the proportion of the hauled-out population in groups too small to be detected in the infrared imagery. We believe that this approach offers the potential for obtaining reliable population estimates for walruses and other ice-associated pinnipeds. © 2007 by the Society for Marine Mammalogy.
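The two-stage expansion logic (infrared strip transects calibrated by photographic subsampling of group sizes) can be sketched as a simple ratio estimator. This is an illustrative simplification of the survey design, not the authors' estimator, and it ignores the telemetry and small-group corrections the record proposes:

```python
import numpy as np

def estimate_total(n_groups_detected, photo_counts, strip_area, total_area):
    """Hypothetical two-stage expansion: the mean photographed group size
    scales detected groups up to animals on the strips, then the area
    ratio expands the strip total to the full survey area."""
    mean_group = float(np.mean(photo_counts))      # calibration from photos
    animals_on_strips = n_groups_detected * mean_group
    return animals_on_strips * (total_area / strip_area)
```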

  20. Building change detection via a combination of CNNs using only RGB aerial imageries

    NASA Astrophysics Data System (ADS)

    Nemoto, Keisuke; Hamaguchi, Ryuhei; Sato, Masakazu; Fujita, Aito; Imaizumi, Tomoyuki; Hikosaka, Shuhei

    2017-10-01

    Building change information extracted from remote sensing imageries is important for various applications such as urban management and marketing planning. The goal of this work is to develop a methodology for automatically capturing building changes from remote sensing imageries. Recent studies have addressed this goal by exploiting 3-D information as a proxy for building height. In contrast, because in practice it is expensive or impossible to prepare 3-D information, we do not rely on 3-D data but focus on using only RGB aerial imageries. Instead, we employ deep convolutional neural networks (CNNs) to extract effective features and improve change detection accuracy in RGB remote sensing imageries. We consider two aspects of building change detection: building detection and subsequent change detection. Our proposed methodology was tested on several areas, which have some differences, such as dominant building characteristics and varying brightness values. Over all the tested areas, the proposed method provides good results for changed objects, with recall values over 75% under a strict overlap requirement of over 50% intersection-over-union (IoU). When the IoU threshold was relaxed to over 10%, the resulting recall values were over 81%. We conclude that the use of CNNs enables accurate detection of building changes without employing 3-D information.
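The recall-at-IoU evaluation used in this record can be sketched directly; masks and thresholds below are illustrative:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean object masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def recall_at_iou(gt_masks, pred_masks, thresh=0.5):
    """Fraction of ground-truth objects matched by at least one
    prediction with IoU above thresh; relaxing thresh (e.g. 0.5 to
    0.1) can only raise this recall, as in the reported figures."""
    hits = 0
    for g in gt_masks:
        if any(iou(g, p) > thresh for p in pred_masks):
            hits += 1
    return hits / len(gt_masks)
```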

  1. Vernal Pools Detection Using High-Resolution LiDAR Data and Aerial Imagery in Hubbardston, Massachusetts

    NASA Astrophysics Data System (ADS)

    Jiang, Jiaxin

    Vernal pools are temporary or semi-permanent pools that occur in surface depressions without permanent inlets or outlets. Because they periodically dry out, vernal pools are free of fish and are essential breeding habitats for amphibians, some reptiles, birds, and mammals. In Massachusetts, vernal pool habitats are found in woodland depressions, swales, or kettle holes where water is contained for at least two months in most years. However, vernal pools are delicate ecosystems, fragile in the face of human activities such as urbanization. Understanding the current situation of vernal pools helps city planners make wiser decisions. This study focuses on identifying vernal pools in the state of Massachusetts with high-resolution light detection and ranging (LiDAR) data and aerial imagery. By using high-resolution LiDAR data, aerial imagery, land use data, the MassDEP Hydrography layer, and the Soil Survey Geographic Database, the approach located over 1800 potential vernal pools in a 108 km² study area in Massachusetts. Assessment of the results shows a commission rate of 5.6% and an omission rate of 7.1%. This method provides an efficient way of locating vernal pools over large areas.
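The commission and omission rates quoted here follow the usual detection-error definitions; a minimal sketch with made-up counts (not the study's numbers):

```python
def commission_rate(tp, fp):
    """Commission error: fraction of detected pools that are false
    detections, fp / (tp + fp)."""
    return fp / (tp + fp)

def omission_rate(tp, fn):
    """Omission error: fraction of true pools that were missed,
    fn / (tp + fn)."""
    return fn / (tp + fn)
```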

  2. Earth mapping - aerial or satellite imagery comparative analysis

    NASA Astrophysics Data System (ADS)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

    Nowadays, solving the tasks of revising existing map products and creating new maps requires choosing a source of land cover imagery. The issue of the effectiveness and cost of aerial mapping systems versus the efficiency and cost of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is twofold: to make a comparative analysis between the two approaches to mapping the Earth with respect to two parameters, quality and cost; and to suggest an approach for selecting the map information source, i.e., airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area that equals approximately one satellite scene, and an area that equals approximately the territory of Bulgaria.

  3. Enhancement of spectral quality of archival aerial photographs using satellite imagery for detection of land cover

    NASA Astrophysics Data System (ADS)

    Siok, Katarzyna; Jenerowicz, Agnieszka; Woroszkiewicz, Małgorzata

    2017-07-01

    Archival aerial photographs are often the only reliable source of information about an area. However, they are single-band data that do not allow unambiguous detection of particular forms of land cover. Thus, the authors of this article seek to develop a method of coloring panchromatic aerial photographs that increases the spectral information of such images. The study used data integration algorithms based on pansharpening, implemented in commonly used remote sensing programs: ERDAS, ENVI, and PCI. Aerial photos and Landsat multispectral data recorded in 1987 and 2016 were chosen. This study proposes the use of modified intensity-hue-saturation (IHS) and Brovey methods. The use of these methods enabled the addition of red-green-blue (RGB) components to monochrome images, thus enhancing their interpretability and spectral quality. The limitations of the proposed method relate to the availability of RGB satellite imagery, the accuracy of the mutual orientation of the aerial and satellite data, and the imperfections of archival aerial photographs. Therefore, it should be expected that the coloring results will not be perfect compared to the results of fusing recent data with a similar ground sampling resolution, but they will still allow a more accurate and efficient classification of the land cover registered on archival aerial photographs.
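Of the two fusion methods mentioned, the Brovey transform has a particularly compact form: each low-resolution band is scaled by the ratio of the panchromatic band to the bands' mean intensity. A sketch, assuming the multispectral bands have already been resampled to the panchromatic resolution (here the archival photograph plays the role of the pan band):

```python
import numpy as np

def brovey(rgb, pan, eps=1e-9):
    """Brovey pansharpening: out_b = rgb_b * pan / mean(rgb).
    rgb: (H, W, 3) float array upsampled to pan resolution;
    pan: (H, W) panchromatic band; eps avoids division by zero."""
    intensity = rgb.mean(axis=2)
    ratio = pan / (intensity + eps)
    return rgb * ratio[..., None]
```

The result inherits the pan band's spatial detail while preserving the band ratios (hue) of the multispectral input, which is why the method suits colorization of a sharp monochrome photograph.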

  4. Outlier and target detection in aerial hyperspectral imagery: a comparison of traditional and percentage occupancy hit or miss transform techniques

    NASA Astrophysics Data System (ADS)

    Young, Andrew; Marshall, Stephen; Gray, Alison

    2016-05-01

    The use of aerial hyperspectral imagery for remote sensing is a rapidly growing research area. Currently, targets are generally detected by looking for distinct spectral features of the objects under surveillance. For example, a camouflaged vehicle, deliberately designed to blend into background trees and grass in the visible spectrum, can be revealed using spectral features in the near-infrared spectrum. This work aims to develop improved target detection methods using a two-stage approach: first, by developing a physics-based atmospheric correction algorithm to convert radiance into reflectance hyperspectral image data, and second, by using improved outlier detection techniques. In this paper, the use of the Percentage Occupancy Hit or Miss Transform is explored to provide an automated method for target detection in aerial hyperspectral imagery.
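The radiance-to-reflectance step can be illustrated with the standard empirical line method, used here only as a stand-in since the record does not specify the paper's physics-based algorithm: per band, a linear mapping reflectance = a·L + b is fitted from calibration targets of known reflectance and applied to the whole cube.

```python
import numpy as np

def empirical_line(radiance, targets_rad, targets_ref):
    """Empirical line atmospheric correction (illustrative stand-in).
    radiance: (H, W, B) at-sensor radiance cube;
    targets_rad / targets_ref: (n_targets, B) measured radiance and
    known reflectance of calibration panels.
    Fits reflectance = a*L + b per band and applies it."""
    targets_rad = np.asarray(targets_rad, dtype=float)
    targets_ref = np.asarray(targets_ref, dtype=float)
    out = np.empty_like(radiance, dtype=float)
    for b in range(radiance.shape[2]):
        a, c = np.polyfit(targets_rad[:, b], targets_ref[:, b], 1)
        out[..., b] = a * radiance[..., b] + c
    return out
```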

  5. Lake Superior water quality near Duluth from analysis of aerial photos and ERTS imagery

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.; Van Domelen, J. F.

    1973-01-01

    ERTS imagery of Lake Superior in the late summer of 1972 shows dirty water near the city of Duluth. Water samples and simultaneous photographs were taken on three separate days following a heavy storm which caused muddy runoff water. The water samples were analyzed for turbidity, color, and solids. Reflectance and transmittance characteristics of the water samples were determined with a spectrophotometer apparatus. This same apparatus attached to a microdensitometer was used to analyze the photographs for the approximate colors or wavelengths of reflected energy that caused the exposure. Although other parameters do correlate for any one particular day, it is only the water quality parameter of turbidity that correlates with the aerial imagery on all days, as the character of the dirty water changes due to settling and mixing.

  6. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated on multiple datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
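The geometric-constraints filtering step in the first detection stage can be sketched as simple area and aspect-ratio gating; the thresholds and the bounding-box blob representation below are hypothetical, not the paper's values:

```python
def filter_blobs(blobs, min_area=20, max_area=400, max_aspect=3.0):
    """Hypothetical geometric-constraint filter for candidate pedestrian
    blobs: keep blobs whose pixel area and height/width aspect ratio
    fall in plausible ranges for a person seen from a UAV.
    blobs: list of (height, width) bounding-box sizes in pixels."""
    kept = []
    for h, w in blobs:
        area = h * w
        aspect = max(h, w) / max(min(h, w), 1)
        if min_area <= area <= max_area and aspect <= max_aspect:
            kept.append((h, w))
    return kept
```

Candidates surviving this cheap gate would then be passed to the HOG+DCT descriptor and linear SVM for final classification, as the abstract describes.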

  7. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

    PubMed

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-03-26

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated on multiple datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.

  8. Tracking stormwater discharge plumes and water quality of the Tijuana River with multispectral aerial imagery

    NASA Astrophysics Data System (ADS)

    Svejkovsky, Jan; Nezlin, Nikolay P.; Mustain, Neomi M.; Kum, Jamie B.

    2010-04-01

Spatial-temporal characteristics and environmental factors regulating the behavior of stormwater runoff from the Tijuana River in southern California were analyzed utilizing very high resolution aerial imagery and time-coincident environmental and bacterial sampling data. Thirty-nine multispectral aerial images with 2.1-m spatial resolution were collected after major rainstorms during 2003-2008. Utilizing differences in color reflectance characteristics, the ocean surface was classified into non-plume waters and three components of the runoff plume reflecting differences in age and suspended sediment concentrations. Tijuana River discharge rate was the primary factor regulating the size of the freshest plume component and its shorelong extensions to the north and south. Wave direction was found to affect the shorelong distribution of the shoreline-connected fresh plume components much more strongly than wind direction. Wave-driven sediment resuspension also significantly contributed to the size of the oldest plume component. Surf zone bacterial samples collected near the time of each image acquisition were used to evaluate the contamination characteristics of each plume component. The bacterial contamination of the freshest plume waters was very high (100% of surf zone samples exceeded California standards), but the oldest plume areas were heterogeneous, including both polluted and clean waters. The aerial imagery archive allowed study of river runoff characteristics on a plume component level, not previously done with coarser satellite images. Our findings suggest that high resolution imaging can quickly identify the spatial extents of the most polluted runoff but cannot be relied upon to always identify the entire polluted area. Our results also indicate that wave-driven transport is important in distributing the most contaminated plume areas along the shoreline.

  9. Quantifying the rapid evolution of a nourishment project with video imagery

    USGS Publications Warehouse

    Elko, N.A.; Holman, R.A.; Gelfenbaum, G.

    2005-01-01

Spatially and temporally high-resolution video imagery was combined with traditional surveyed beach profiles to investigate the evolution of a rapidly eroding beach nourishment project. Upham Beach is a 0.6-km beach located downdrift of a structured inlet on the west coast of Florida. The beach was stabilized in a seaward-advanced position during the 1960s and has been nourished every 4-5 years since 1975. During the 1996 nourishment project, 193,000 m³ of sediment advanced the shoreline as much as 175 m. Video images were collected concurrent with traditional surveys during the 1996 nourishment project to test video imaging as a nourishment monitoring technique. Video imagery illustrated morphologic changes that were unapparent in survey data. Increased storminess during the second (El Niño) winter after the 1996 project resulted in increased erosion rates of 0.4 m/d (135.0 m/y), as compared with 0.2 m/d (69.4 m/y) during the first winter. The measured half-life, the time at which 50% of the nourished material remains, of the nourishment project was 0.94 years. A simple analytical equation indicates reasonable agreement with the measured values, suggesting that project evolution follows a predictable pattern of exponential decay. Long-shore planform equilibration does not occur on Upham Beach; rather, sediment diffuses downdrift until 100% of the nourished material erodes. The wide nourished beach erodes rapidly due to the lack of sediment bypassing from the north and the stabilized headland at Upham Beach that is exposed to wave energy.
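The exponential-decay evolution can be made concrete with a worked example. If the remaining nourishment volume follows V(t) = V0·e^(−kt), the half-life is ln 2 / k; the sketch below back-derives the decay constant from the reported 0.94-year half-life (a simplification of the analytical model in the paper):

```python
import math

V0 = 193_000.0               # initial nourishment volume, m^3 (from the study)
half_life = 0.94             # years (reported measured half-life)
k = math.log(2) / half_life  # decay constant implied by the half-life

def remaining_volume(t_years):
    """Exponential-decay model of remaining nourished sediment volume."""
    return V0 * math.exp(-k * t_years)

# By construction, half of the material remains at t = half_life.
```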

  10. Application of Unmanned Aerial Systems in Spatial Downscaling of Landsat VIR imageries of Agricultural Fields

    NASA Astrophysics Data System (ADS)

    Torres, A.; Hassan Esfahani, L.; Ebtehaj, A.; McKee, M.

    2016-12-01

    While coarse space-time resolution of satellite observations in visible to near infrared (VIR) is a serious limiting factor for applications in precision agriculture, high resolution remotes sensing observation by the Unmanned Aerial Systems (UAS) systems are also site-specific and still practically restrictive for widespread applications in precision agriculture. We present a modern spatial downscaling approach that relies on new sparse approximation techniques. The downscaling approach learns from a large set of coincident low- and high-resolution satellite and UAS observations to effectively downscale the satellite imageries in VIR bands. We focus on field experiments using the AggieAirTM platform and Landsat 7 ETM+ and Landsat 8 OLI observations obtained in an intensive field campaign in 2013 over an agriculture field in Scipio, Utah. The results show that the downscaling methods can effectively increase the resolution of Landsat VIR imageries by the order of 2 to 4 from 30 m to 15 and 7.5 m, respectively. Specifically, on average, the downscaling method reduces the root mean squared errors up to 26%, considering bias corrected AggieAir imageries as the reference.

  11. Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.

    2016-01-01

Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ±8–9 cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to
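DEM differencing reduces to an array subtraction plus a change-detection threshold. A toy sketch, with made-up elevations and the ~8 cm detection limit reported above used as an illustrative threshold:

```python
import numpy as np

# Two toy coregistered DEMs (metres); real grids would come from
# photogrammetry at ~12 cm ground sampling distance.
dem_before = np.array([[10.00, 10.10],
                       [10.20, 10.30]])
dem_after  = np.array([[10.00,  9.95],
                       [10.35, 10.31]])

change = dem_after - dem_before            # positive = deposition, negative = erosion

# Mask out changes below the detection limit (~±8-9 cm in the study).
DETECTION_LIMIT = 0.08                     # metres, illustrative
significant = np.where(np.abs(change) >= DETECTION_LIMIT, change, 0.0)

net_movement = significant.sum()           # net elevation change over the grid
```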

  12. Automatic extraction of tree crowns from aerial imagery in urban environment

    NASA Astrophysics Data System (ADS)

    Liu, Jiahang; Li, Deren; Qin, Xunwen; Yang, Jianfeng

    2006-10-01

Traditionally, field-based investigation has been the main method of surveying greenbelt in urban environments, but it is costly and has a low update frequency. In high-resolution imagery, the structure and texture of tree canopy show great statistical similarity despite large differences in canopy configuration, and the surface structure and texture of tree crowns differ markedly from those of other land-cover types. In this paper, we present an automatic method to detect tree crowns in high-resolution imagery of urban environments without any a priori knowledge. Our method captures the unique structure and texture of the tree crown surface: it uses the variance and mathematical expectation of a defined image window to coarsely position candidate canopy blocks, and then analyzes their inner structure and texture to refine these candidates. The possible ranges of all feature parameters used in the method are generated automatically from a small number of samples, and holes and their distribution are introduced as an important characteristic in the refinement step. The isotropy of candidate image blocks and of the holes' distribution is also integrated into the method. After presenting the theory of our method, aerial imagery (with a resolution of about 0.3 m) was used to test it, and the results indicate that our method is an effective approach to automatically detecting tree crowns in urban environments.
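The coarse positioning step, scanning image windows and flagging those whose statistics fall in a crown-like range, might look like the following. The window size, toy image, and variance threshold are invented for illustration, not taken from the paper:

```python
import numpy as np

def window_stats(image, size):
    """Mean and variance of each non-overlapping size x size window,
    used here to flag candidate tree-crown blocks."""
    h, w = image.shape
    means, variances, coords = [], [], []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            win = image[r:r+size, c:c+size]
            means.append(win.mean())
            variances.append(win.var())
            coords.append((r, c))
    return coords, np.array(means), np.array(variances)

# Toy image: a textured (high-variance) block next to a flat block.
img = np.zeros((4, 8))
img[:, :4] = [[0, 1, 0, 1]] * 4          # checkerboard-like canopy texture
coords, means, variances = window_stats(img, 4)

# Flag windows whose variance exceeds an (illustrative) texture threshold.
candidates = [coords[i] for i in range(len(coords)) if variances[i] > 0.1]
```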

  13. Multiple vehicle tracking in aerial video sequence using driver behavior analysis and improved deterministic data association

    NASA Astrophysics Data System (ADS)

    Zhang, Xunxun; Xu, Hongke; Fang, Jianwu

    2018-01-01

Along with the rapid development of unmanned aerial vehicle technology, multiple vehicle tracking (MVT) in aerial video sequences has received widespread interest for providing required traffic information. Due to camera motion and complex backgrounds, MVT in aerial video sequences poses unique challenges. We propose an efficient MVT algorithm via a driver behavior-based Kalman filter (DBKF) and an improved deterministic data association (IDDA) method. First, a hierarchical image registration method is put forward to compensate for the camera motion. Afterward, to improve the accuracy of the state estimation, we propose the DBKF module by incorporating driver behavior into the Kalman filter, where an artificial potential field is introduced to reflect the driver behavior. Then, to implement the data association, a local optimization method is designed instead of global optimization. By introducing an adaptive operating strategy, the proposed IDDA method can also deal with situations in which vehicles suddenly appear or disappear. Finally, comprehensive experiments on the DARPA VIVID data set and the KIT AIS data set demonstrate that the proposed algorithm generates satisfactory and superior results.
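The DBKF builds on a standard Kalman filter. The sketch below is a plain 1-D constant-velocity filter for a tracked vehicle position; the paper's driver-behavior (artificial potential field) term in the prediction is omitted, and all noise parameters are illustrative:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 0.01                    # process noise covariance (illustrative)
R = np.array([[0.1]])                   # measurement noise covariance (illustrative)

def kf_step(x, P, z):
    """One predict + update cycle for position measurement z."""
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y                        # update state
    P = (np.eye(2) - K @ H) @ P          # update covariance
    return x, P

x = np.array([0.0, 1.0])                 # start at 0, moving 1 px/frame
P = np.eye(2)
for z in [1.05, 2.1, 2.95]:              # noisy position measurements
    x, P = kf_step(x, P, np.array([z]))
# The estimate should track the vehicle near position 3 with velocity near 1.
```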

  14. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  15. New perspectives on archaeological prospecting: Multispectral imagery analysis from Army City, Kansas, USA

    NASA Astrophysics Data System (ADS)

    Banks, Benjamin Daniel

Aerial imagery analysis has a long history in European archaeology and, despite early attempts, little progress has been made to promote its use in North America. Recent advances in multispectral satellite and aerial sensors are helping to make aerial imagery analysis more effective in North America, and more cost effective. A site in northeastern Kansas is explored using multispectral aerial and satellite imagery allowing buried features to be mapped. Many of the problems associated with early aerial imagery analysis are explored, such as knowledge of archeological processes that contribute to crop mark formation. Use of multispectral imagery provides a means of detecting and enhancing crop marks not easily distinguishable in visible spectrum imagery. Unsupervised computer classification of potential archaeological features permits their identification and interpretation, while supervised classifications, incorporating limited amounts of geophysical data, provide a more detailed understanding of the site. Supervised classifications allow archaeological processes contributing to crop mark formation to be explored. Aerial imagery analysis is argued to be useful for a wide range of archeological problems, reducing the person-hours and expenses needed for site delineation and mapping. This technology may be especially useful for cultural resources management.

  16. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    USGS Publications Warehouse

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m2). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.
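The resolution effect on small waterbodies can be illustrated with a toy resampling experiment. Block-averaging a binary mask and re-thresholding is only a stand-in for classifying coarser imagery, not the study's actual procedure:

```python
import numpy as np

def downsample_binary(mask, factor):
    """Block-average a binary land(0)/water(1) mask and re-threshold at 0.5,
    mimicking classification at a coarser ground sample distance."""
    h, w = mask.shape
    blocks = mask.reshape(h // factor, factor, w // factor, factor)
    fraction = blocks.mean(axis=(1, 3))      # water fraction per coarse cell
    return (fraction > 0.5).astype(int)

fine = np.zeros((8, 8), dtype=int)
fine[0:4, 0:4] = 1        # a large waterbody (16 fine cells)
fine[6, 6] = 1            # a single-cell pond

coarse = downsample_binary(fine, 4)   # 4x coarser grid

# The large waterbody survives coarsening; the tiny pond disappears,
# echoing the study's finding for waterbodies below the resolution limit.
```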

  17. The Effects of Mental Imagery with Video-Modeling on Self-Efficacy and Maximal Front Squat Ability

    PubMed Central

    Buck, Daniel J. M.; Hutchinson, Jasmin C.; Winter, Christa R.; Thompson, Brian A.

    2016-01-01

This study was designed to assess the effectiveness of mental imagery supplemented with video-modeling on self-efficacy and front squat strength (three repetition maximum; 3RM). Subjects (13 male, 7 female) who had at least 6 months of front squat experience were assigned to either an experimental (n = 10) or a control (n = 10) group. Subjects' 3RM and self-efficacy for the 3RM were measured at baseline. Following this, subjects in the experimental group followed a structured imagery protocol, incorporating video recordings of both their own 3RM performance and a model lifter with excellent technique, twice a day for three days. Subjects in the control group spent the same amount of time viewing a placebo video. Following three days with no physical training, measurements of front squat 3RM and self-efficacy for the 3RM were repeated. Subjects in the experimental group increased in self-efficacy following the intervention, and showed greater 3RM improvement than those in the control group. Self-efficacy was found to significantly mediate the relationship between imagery and front squat 3RM. These findings point to the importance of mental skills training for the enhancement of self-efficacy and front squat performance.

  18. The Effects of Mental Imagery with Video-Modeling on Self-Efficacy and Maximal Front Squat Ability.

    PubMed

    Buck, Daniel J M; Hutchinson, Jasmin C; Winter, Christa R; Thompson, Brian A

    2016-04-14

This study was designed to assess the effectiveness of mental imagery supplemented with video-modeling on self-efficacy and front squat strength (three repetition maximum; 3RM). Subjects (13 male, 7 female) who had at least 6 months of front squat experience were assigned to either an experimental (n = 10) or a control (n = 10) group. Subjects' 3RM and self-efficacy for the 3RM were measured at baseline. Following this, subjects in the experimental group followed a structured imagery protocol, incorporating video recordings of both their own 3RM performance and a model lifter with excellent technique, twice a day for three days. Subjects in the control group spent the same amount of time viewing a placebo video. Following three days with no physical training, measurements of front squat 3RM and self-efficacy for the 3RM were repeated. Subjects in the experimental group increased in self-efficacy following the intervention, and showed greater 3RM improvement than those in the control group. Self-efficacy was found to significantly mediate the relationship between imagery and front squat 3RM. These findings point to the importance of mental skills training for the enhancement of self-efficacy and front squat performance.

  19. A semi-automated approach to derive elevation time-series and calculate glacier mass balance from historical aerial imagery

    NASA Astrophysics Data System (ADS)

    Whorton, E.; Headman, A.; Shean, D. E.; McCann, E.

    2017-12-01

Understanding the implications of glacier recession on water resources in the western U.S. requires quantifying glacier mass change across large regions over several decades. Very few glaciers in North America have long-term continuous field measurements of glacier mass balance. However, systematic aerial photography campaigns began in 1957 on many glaciers in the western U.S. and Alaska. These historical, vertical aerial stereo-photographs documenting glacier evolution have recently become publicly available. Digital elevation models (DEMs) of the transient glacier surface preserved in each imagery timestamp can be derived and then differenced to calculate glacier volume and mass change to improve regional geodetic solutions of glacier mass balance. In order to batch process these data, we use Python-based algorithms and Agisoft Photoscan structure from motion (SfM) photogrammetry software to semi-automate DEM creation, and orthorectify and co-register historical aerial imagery in a high-performance computing environment. Scanned photographs are rotated to reduce scaling issues, cropped to the same size to remove fiducials, and batch histogram equalization is applied to improve image quality and aid pixel-matching algorithms using the Python library OpenCV. Processed photographs are then passed to Photoscan through the Photoscan Python library to create DEMs and orthoimagery. To extend the period of record, the elevation products are co-registered to each other, airborne LiDAR data, and DEMs derived from sub-meter commercial satellite imagery. With the exception of the placement of ground control points, the process is entirely automated with Python. Current research is focused on: one, applying these algorithms to create geodetic mass balance time series for the 90 photographed glaciers in Washington State and two, evaluating the minimal amount of positional information required in Photoscan to prevent distortion effects that cannot be addressed during co
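The batch histogram-equalization step can be sketched in plain NumPy (the paper uses the OpenCV library; this is an equivalent global equalization, not their exact code):

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap grey levels so their cumulative distribution becomes ~uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # CDF at lowest present level
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# Low-contrast scan: grey values squeezed into [100, 120].
scan = np.random.default_rng(0).integers(100, 121, size=(32, 32)).astype(np.uint8)
equalized = equalize_histogram(scan)
# After equalization the image uses the full 0-255 dynamic range.
```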

  20. Mapping coastal marine debris using aerial imagery and spatial analysis.

    PubMed

    Moy, Kirsten; Neilson, Brian; Chung, Anne; Meadows, Amber; Castrence, Miguel; Ambagis, Stephen; Davidson, Kristine

    2017-12-19

This study is the first to systematically quantify, categorize, and map marine macro-debris across the main Hawaiian Islands (MHI), including remote areas (e.g., Niihau, Kahoolawe, and northern Molokai). Aerial surveys were conducted over each island to collect high resolution photos, which were processed into orthorectified imagery and visually analyzed in GIS. The technique provided precise measurements of the quantity, location, type, and size of macro-debris (>0.05 m²), identifying 20,658 total debris items. Northeastern (windward) shorelines had the highest density of debris. Plastics, including nets, lines, buoys, floats, and foam, comprised 83% of the total count. In addition, the study located six vessels from the 2011 Tōhoku tsunami. These results created a baseline of the location, distribution, and composition of marine macro-debris across the MHI. Resource managers and communities may target high priority areas, particularly along remote coastlines where macro-debris counts were largely undocumented. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  2. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.

    2018-05-01

The developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high resolution (e.g., less than a 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused to produce an accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross-validation, obtaining an overall accuracy of 0.945 and a Kappa value of 0.912. Finally, the relationship between detected weeds and their ground-truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. In addition, the importance of the input features was evaluated, and the ratio of vegetation length to width was found to be the most significant feature for the classification model. Overall, our approach can yield a satisfactory weed map, and we expect that the obtained accurate and timely weed map from UAV imagery will be applicable to realizing site-specific weed management (SSWM) in early season crop fields, reducing the spraying of non-selective herbicides and costs.
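The inter-row detection idea, Hough voting in (rho, theta) space so that collinear vegetation pixels reinforce a single line hypothesis, can be sketched as follows. This is a bare-bones transform run on synthetic pixels, not the paper's implementation:

```python
import numpy as np

def hough_best_line(points, shape, n_theta=180):
    """Bare-bones Hough transform: each (y, x) pixel votes for every
    (rho, theta) line passing through it; the best-supported line wins."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    acc = np.zeros((2 * diag, n_theta), dtype=int)   # (rho, theta) accumulator
    for (y, x) in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # one vote per theta
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, np.rad2deg(thetas[theta_idx])

# Synthetic vegetation pixels lying on a vertical crop row at x = 5.
row_pixels = [(y, 5) for y in range(0, 200, 10)]
rho, theta_deg = hough_best_line(row_pixels, (200, 200))
# A vertical line x = 5 corresponds to rho = 5 at theta = 0 degrees.
```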

  3. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    PubMed Central

    Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng

    2016-01-01

Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, which provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to address moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
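The first stage, accumulating per-pixel frame differences into a motion heat map, can be sketched on a synthetic sequence (the threshold and frame sizes are illustrative):

```python
import numpy as np

def motion_heat_map(frames, diff_threshold=10):
    """Accumulate per-pixel frame differences into a heat map, so that
    regions repeatedly crossed by moving objects become 'hot'."""
    heat = np.zeros(frames[0].shape, dtype=float)
    for prev, curr in zip(frames, frames[1:]):
        moving = np.abs(curr.astype(int) - prev.astype(int)) > diff_threshold
        heat += moving
    return heat

# Synthetic sequence: a bright 1-pixel 'vehicle' moving left to right
# along row 1 of a tiny 3 x 5 scene.
frames = []
for x in range(5):
    f = np.zeros((3, 5), dtype=np.uint8)
    f[1, x] = 200
    frames.append(f)

heat = motion_heat_map(frames)
# Row 1 (the vehicle's path) accumulates heat; other rows stay cold.
```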

  4. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs were used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures like RMSE, median, normalized median absolute deviation and their confidence intervals, and quantiles are computed. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show great potential of the DTM produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
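The accuracy measures named above are straightforward to compute from a set of checkpoint errors. The sketch below uses invented error values to show why the robust NMAD is less sensitive to outliers than the RMSE:

```python
import numpy as np

def accuracy_measures(errors):
    """Classical and robust vertical-accuracy measures for DTM checkpoints."""
    errors = np.asarray(errors, dtype=float)
    rmse = np.sqrt(np.mean(errors ** 2))
    median = np.median(errors)
    # Normalized median absolute deviation: a robust spread estimate,
    # comparable to a standard deviation for normally distributed errors.
    nmad = 1.4826 * np.median(np.abs(errors - median))
    q68, q95 = np.quantile(np.abs(errors), [0.683, 0.95])
    return rmse, median, nmad, q68, q95

errors = [-0.05, 0.02, 0.00, 0.03, -0.01, 0.40]   # metres; one outlier
rmse, median, nmad, q68, q95 = accuracy_measures(errors)
# The single 0.40 m outlier inflates the RMSE far more than the NMAD.
```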

  5. Mosaicking Techniques for Deep Submergence Vehicle Video Imagery - Applications to Ridge2000 Science

    NASA Astrophysics Data System (ADS)

    Mayer, L.; Rzhanov, Y.; Fornari, D. J.; Soule, A.; Shank, T. M.; Beaulieu, S. E.; Schouten, H.; Tivey, M.

    2004-12-01

Severe attenuation of visible light and limited power capabilities of many submersible vehicles require acquisition of imagery from short ranges, rarely exceeding 8-10 meters. Although modern video- and photo-equipment makes high-resolution video surveying possible, the field of view of each image remains relatively narrow. To compensate for the deficiencies in light and field of view, researchers have been developing techniques for combining images into larger composite images, i.e., mosaicking. A properly constructed, accurate mosaic has a number of well-known advantages in comparison with the original sequence of images, the most notable being improved situational awareness. We have developed software strategies for PC-based computers that permit conversion of video imagery acquired from any underwater vehicle, operated within both absolute (e.g. LBL or USBL) or relative (e.g. Doppler Velocity Log-DVL) navigation networks, to quickly produce a set of geo-referenced photomosaics which can then be directly incorporated into a Geographic Information System (GIS) database. The timescale of processing is rapid enough to permit analysis of the resulting mosaics between submersible dives, thus enhancing the efficiency of deep-sea research. Commercial imaging processing packages usually handle cases where there is no or little parallax - an unlikely situation for the undersea world, where terrain has pronounced 3D content and imagery is acquired from moving platforms. The approach we have taken is optimized for situations in which there is significant relief and thus parallax in the imagery (e.g. seafloor fault scarps or constructional volcanic escarpments and flow fronts). The basis of all mosaicking techniques is a pair-wise image registration method that finds a transformation relating pixels of two consecutive image frames. 
We utilize a "rigid affine model" with four degrees of freedom for image registration that allows for camera translation in all directions and
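A "rigid affine model" with four degrees of freedom is, in effect, a 2-D similarity transform (two translations, a rotation, and a scale). A minimal sketch of applying such a transform to register point sets, with synthetic points rather than the authors' code:

```python
import numpy as np

def similarity_transform(points, tx, ty, theta, scale):
    """Apply a 4-degree-of-freedom transform (translation tx/ty, rotation
    theta, uniform scale) of the kind used to relate consecutive frames."""
    c, s = np.cos(theta), np.sin(theta)
    R = scale * np.array([[c, -s], [s, c]])   # scaled rotation matrix
    return points @ R.T + np.array([tx, ty])

# Registering frame B to frame A: B is A shifted by (3, -2), no rotation.
pts_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts_b = similarity_transform(pts_a, 3.0, -2.0, 0.0, 1.0)

# A pure 90-degree rotation maps (1, 0) onto (0, 1).
pts_c = similarity_transform(np.array([[1.0, 0.0]]), 0.0, 0.0, np.pi / 2, 1.0)
```

In practice the four parameters are estimated from matched features between the frames; here they are simply given to show what the model does.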

  6. Trafficking in tobacco farm culture: Tobacco companies use of video imagery to undermine health policy

    PubMed Central

    Otañez, Martin G; Glantz, Stanton A

    2009-01-01

    The cigarette companies and their lobbying organization used tobacco industry-produced films and videos about tobacco farming to support their political, public relations, and public policy goals. Critical discourse analysis shows how tobacco companies utilized film and video imagery and narratives of tobacco farmers and tobacco economies for lobbying politicians and influencing consumers, industry-allied groups, and retail shop owners to oppose tobacco control measures and counter publicity on the health hazards, social problems, and environmental effects of tobacco growing. Imagery and narratives of tobacco farmers, tobacco barns, and agricultural landscapes in industry videos constituted a tobacco industry strategy to construct a corporate vision of tobacco farm culture that privileges the economic benefits of tobacco. The positive discursive representations of tobacco farming ignored actual behavior of tobacco companies to promote relationships of dependency and subordination for tobacco farmers and to contribute to tobacco-related poverty, child labor, and deforestation in tobacco growing countries. While showing tobacco farming as a family and a national tradition and a source of jobs, tobacco companies portrayed tobacco as a tradition to be protected instead of an industry to be regulated and denormalized. PMID:20160936

  7. Monitoring the Invasion of Spartina alterniflora Using Very High Resolution Unmanned Aerial Vehicle Imagery in Beihai, Guangxi (China)

    PubMed Central

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% compared with the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population. PMID:24892066

  8. Monitoring the invasion of Spartina alterniflora using very high resolution unmanned aerial vehicle imagery in Beihai, Guangxi (China).

    PubMed

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Fu, Jingying; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% compared with the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population.

  9. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    PubMed

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The

  10. Evaluate ERTS imagery for mapping and detection of changes of snowcover on land and on glaciers

    NASA Technical Reports Server (NTRS)

    Meier, M. F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The area of snow cover on land was determined from ERTS-1 imagery. Snow cover in specific drainage basins was measured with the Stanford Research Institute console by electronically superimposing basin outlines on imagery, with video density slicing to measure areas. Snow covered area and snowline altitudes were also determined by enlarging ERTS-1 imagery to 1:250,000 scale and using a transparent map overlay. Under very favorable conditions, snowline altitude was determined to an accuracy of about 60 m. Ability to map snow cover or to determine snowline altitude depends primarily on cloud cover and vegetation and secondarily on slope, terrain roughness, sun angle, radiometric fidelity, and amount of spectral information available. Glacier accumulation area ratios were determined from ERTS-1 imagery. Also, subtle flow structures, undetected on aerial photographs, were visible. Surging glaciers were identified, and the changes resulting from the surge of a large glacier were measured as were changes in tidal glacier termini.

  11. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS?s Scientific Manuscript database

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  12. Super-Resolution for "Jilin-1" Satellite Video Imagery via a Convolutional Network.

    PubMed

    Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-04-13

    Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method's practicality. Experimental results on "Jilin-1" satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods.
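
    The reshape layer described above can be realized as a sub-pixel rearrangement (often called pixel shuffle), in which the last convolution emits r² values per low-resolution pixel that are redistributed into an r-times larger image. A minimal, framework-free sketch of that rearrangement (the function name and nested-list layout are our assumptions, not the paper's code):

```python
def pixel_shuffle(features, r):
    """Rearrange an H x W x r^2 feature map (nested lists) into an
    upscaled (r*H) x (r*W) single-channel image."""
    H, W = len(features), len(features[0])
    out = [[0.0] * (W * r) for _ in range(H * r)]
    for i in range(H):
        for j in range(W):
            for c in range(r * r):
                dy, dx = divmod(c, r)  # channel index -> sub-pixel offset
                out[i * r + dy][j * r + dx] = features[i][j][c]
    return out
```

    Because the rearrangement is a fixed permutation, it preserves the spatial distribution of ground objects rather than re-synthesizing them.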

  13. a Sensor Aided H.264/AVC Video Encoder for Aerial Video Sequences with in the Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still image or video data compression. Still image system are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlapping between subsequent frames is very small. In this scenario, UAVs attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can be still performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step in order to improve the position and attitude estimation produced by the navigation system in order to maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.

  14. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models.

    PubMed

    AlDahoul, Nouar; Md Sabri, Aznul Qalid; Mansoor, Ali Mohammed

    2018-01-01

    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamic events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform at varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training accuracy, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds. Learning in S-CNN takes 770 seconds with a high-performance Graphical Processing Unit (GPU).

  15. Environmental waste site characterization utilizing aerial photographs and satellite imagery: Three sites in New Mexico, USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Eeckhout, E.; Pope, P.; Becker, N.

    1996-04-01

    The proper handling and characterization of past hazardous waste sites is becoming more and more important as world population extends into areas previously deemed undesirable. Historical photographs, past records, and current aerial and satellite imagery can play an important role in characterizing these sites. These data provide clear insight into defining problem areas, which can then be surface sampled for further detail. Three such areas are discussed in this paper: (1) nuclear wastes buried in trenches at Los Alamos National Laboratory, (2) surface dumping at one site at Los Alamos National Laboratory, and (3) the historical development of a municipal landfill near Las Cruces, New Mexico.

  16. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
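
    The mapping from PTZ commands to a spherical panoramic viewspace can be sketched as follows; the angle conventions (pan measured from north, tilt as elevation) and the equirectangular layout are illustrative assumptions, not the paper's calibrated control model:

```python
import math

def pan_tilt_to_ray(pan_deg, tilt_deg):
    """Unit viewing direction for a pan (azimuth from +y) and tilt (elevation)."""
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.sin(p), math.cos(t) * math.cos(p), math.sin(t))

def ray_to_panorama_pixel(ray, width, height):
    """Equirectangular mapping of a viewing ray into a width x height panorama."""
    x, y, z = ray
    pan = math.atan2(x, y)                      # [-pi, pi]
    tilt = math.asin(max(-1.0, min(1.0, z)))    # [-pi/2, pi/2]
    u = (pan / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - tilt / math.pi) * (height - 1)
    return u, v
```

    Composing the inverse of this mapping per camera with a ground-plane mapping is what lets panoramic viewspaces be registered to a common aerial orthophotograph.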

  17. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the image bounds are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to which the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
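
    The core of such a geo-rectification code is the mapping between ground (DEM) points and image pixels through a camera model. A minimal sketch of that projection step, assuming a generic pinhole model and row-major rotation matrix (not the specific NASA code):

```python
def project_to_pixel(point_w, cam_pos, R, fx, fy, cx, cy):
    """Project a world point (e.g., a DEM cell) into the image with a
    pinhole model; R is a row-major 3x3 world-to-camera rotation."""
    d = [point_w[i] - cam_pos[i] for i in range(3)]   # world -> camera offset
    Xc = [sum(R[r][c] * d[c] for c in range(3)) for r in range(3)]
    if Xc[2] <= 0:                                    # point is behind the camera
        return None
    u = fx * Xc[0] / Xc[2] + cx                       # perspective divide + intrinsics
    v = fy * Xc[1] / Xc[2] + cy
    return u, v
```

    Iterating this projection over every DEM cell inside the region of interest "paints" the ground with image pixels, as the abstract describes.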

  18. Integration of LiDAR Data with Aerial Imagery for Estimating Rooftop Solar Photovoltaic Potentials in City of Cape Town

    NASA Astrophysics Data System (ADS)

    Adeleke, A. K.; Smit, J. L.

    2016-06-01

    Apart from the drive to reduce carbon dioxide emissions by carbon-intensive economies like South Africa, the recent spate of electricity load shedding across most parts of the country, including Cape Town, has left electricity consumers scampering for alternatives, so as to rely less on the national grid. Solar energy, which is abundantly available in most parts of Africa and regarded as a clean and renewable source of energy, makes it possible to generate electricity using photovoltaic technology. However, before time and financial resources are invested in rooftop solar photovoltaic systems in urban areas, it is important to evaluate the potential of the building rooftops intended to be used in harvesting the solar energy. This paper presents methodologies making use of LiDAR data and other ancillary data, such as high-resolution aerial imagery, to automatically extract building rooftops in the City of Cape Town and evaluate their potential for solar photovoltaic systems. Two main processes were involved: (1) automatic extraction of building roofs using the integration of LiDAR data and aerial imagery in order to derive their outlines and areal coverage; and (2) estimating the global solar radiation incident on each roof surface using an elevation model derived from the LiDAR data, in order to evaluate its solar photovoltaic potential. This resulted in a geodatabase, which can be queried to retrieve salient information about the viability of a particular building roof for solar photovoltaic installation.

  19. Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.

    2013-12-01

    After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs) that provide information on land surface properties have become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in terms of hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk will describe our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classification were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site with respect to a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. We also discuss prospects for the fusion of high resolution models with novel

  20. Surface Temperature Mapping of the University of Northern Iowa Campus Using High Resolution Thermal Infrared Aerial Imageries

    PubMed Central

    Savelyev, Alexander; Sugumaran, Ramanathan

    2008-01-01

    The goal of this project was to map the surface temperature of the University of Northern Iowa campus using high-resolution thermal infrared aerial imagery. A thermal camera with a spectral bandwidth of 3.0-5.0 μm was flown at an average altitude of 600 m, achieving a ground resolution of 29 cm. Ground control data were used to construct the pixel-to-temperature conversion model, which was later used to produce temperature maps of the entire campus and also for validation of the model. The temperature map was then used to assess building rooftop conditions and steam line faults in the study area. Assessment of the temperature map revealed a number of building structures that may require insulation improvement, given the heat leaks indicated by their high surface temperatures. Several hot spots pointing to steam pipeline faults were also identified on the campus. High-resolution thermal infrared imagery proved a highly effective tool for precise heat anomaly detection on the campus, and it can be used by university facility services for effective future maintenance of buildings and grounds. PMID:27873800
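
    A pixel-to-temperature conversion model of this kind is, in its simplest form, a linear regression fitted to ground-control pairs of digital number (DN) and measured temperature. A hedged sketch (the linear form and the function name are our assumptions; the project's actual model may differ):

```python
def fit_dn_to_temperature(dn, temps):
    """Ordinary least squares for T = a*DN + b from ground-control pairs."""
    n = len(dn)
    mean_x = sum(dn) / n
    mean_y = sum(temps) / n
    sxx = sum((x - mean_x) ** 2 for x in dn)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(dn, temps))
    a = sxy / sxx
    return a, mean_y - a * mean_x      # slope, intercept
```

    Held-out ground-control points can then validate the model, as the abstract describes.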

  1. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland

    NASA Astrophysics Data System (ADS)

    Lu, Bing; He, Yuhong

    2017-06-01

    Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. Maps of species distribution are subsequently used for a spatio-temporal change analysis. Results indicate that UAV-acquired imagery is an incomparable data source for studying fine-scale grassland species composition

  2. New interpretations of the Fort Clark State Historic Site based on aerial color and thermal infrared imagery

    NASA Astrophysics Data System (ADS)

    Heller, Andrew Roland

    The Fort Clark State Historic Site (32ME2) is a well known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.

  3. Semantic Segmentation and Difference Extraction via Time Series Aerial Video Camera and its Application

    NASA Astrophysics Data System (ADS)

    Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.

    2015-04-01

    Google Earth with high-resolution imagery typically takes months to process new images before online updates. This is a time-consuming and slow process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occurred over different time series, where only regions with differences will be updated. In our system, aerial images from Massachusetts's road and building open datasets and Saitama district datasets are used as input images. Semantic segmentation is then applied to the input images. Semantic segmentation is a pixel-wise classification of images implemented with deep neural network techniques. Deep neural networks are used because they are not only efficient in learning highly discriminative image features such as roads, buildings, etc., but also partially robust to incomplete and poorly registered target maps. Aerial images which contain semantic information are then stored as a database in a 5D world map and set as ground truth images. This system is developed to visualise multimedia data in 5 dimensions: 3 spatial dimensions, 1 temporal dimension, and 1 degenerated dimension combining semantic and colour information. Next, ground truth images chosen from the database in the 5D world map and a new aerial image with the same spatial information but a different time series are compared via a difference extraction method. The map is only updated where local changes have occurred. Hence, map updating will be cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions alone and only updating changed regions.
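
    The difference extraction step reduces, at its core, to a pixel-wise comparison of two semantic label maps; only tiles whose changed fraction exceeds a threshold need re-rendering. A toy sketch of that comparison (the helper names and class labels are illustrative, not the paper's implementation):

```python
def changed_mask(labels_old, labels_new):
    """Pixel-wise difference of two label maps; True marks a changed pixel."""
    return [[o != n for o, n in zip(row_old, row_new)]
            for row_old, row_new in zip(labels_old, labels_new)]

def changed_fraction(mask):
    """Share of changed pixels, used to decide whether a tile needs updating."""
    total = sum(len(row) for row in mask)
    return sum(map(sum, mask)) / total
```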

  4. Converting aerial imagery to application maps

    USDA-ARS?s Scientific Manuscript database

    Over the last couple of years in Agricultural Aviation and at the 2014 and 2015 NAAA conventions, we have written about and presented both single-camera and two-camera imaging systems for use on agricultural aircraft. Many aerial applicators have shown a great deal of interest in the imaging systems...

  5. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
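
    Once frames are registered, the stacking itself can be as simple as a per-pixel order statistic. A minimal sketch using the median, which suppresses transient noise while preserving the static content (nested-list gray-value frames are an assumption for brevity, not the paper's data format):

```python
from statistics import median

def stack_median(frames):
    """Per-pixel median over a stack of registered gray-value frames."""
    height, width = len(frames[0]), len(frames[0][0])
    return [[median(f[i][j] for f in frames) for j in range(width)]
            for i in range(height)]
```

    Registering to the moving object instead of the background, as the paper proposes, applies the same operation but with the object held fixed, so the surrounding background blurs.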

  6. Identification of wild areas in southern lower Michigan. [terrain analysis from aerial photography, and satellite imagery

    NASA Technical Reports Server (NTRS)

    Habowski, S.; Cialek, C.

    1978-01-01

    An inventory methodology was developed to identify potential wild area sites. A list of site criteria was formulated and tested in six selected counties. Potential sites were initially identified from LANDSAT satellite imagery. A detailed study of the soil, vegetation and relief characteristics of each site based on both high-altitude aerial photographs and existing map data was conducted to eliminate unsuitable sites. Ground reconnaissance of the remaining wild areas was made to verify suitability and acquire information on wildlife and general aesthetics. Physical characteristics of the wild areas in each county are presented in tables. Maps show the potential sites to be set aside for natural preservation and regulation by the state under the Wilderness and Natural Areas Act of 1972.

  7. Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network

    PubMed Central

    Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-01-01

    Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838

  8. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028
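
    The optimization at the heart of this approach is a QUBO: choosing a binary vector x that minimizes xᵀQx, where Q encodes the stump selection objective. On toy sizes the annealer can be stood in for by exhaustive search, which is enough to illustrate the objective (the Q values below are made up):

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy x^T Q x of a binary assignment for a QUBO matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustive minimizer; a stand-in for the annealer on toy problems."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))
```

    On the hardware, each x[i] maps to a qubit and each nonzero off-diagonal Q[i][j] to a physical coupling, which is why the five-to-six-coupling limit forces the truncation and rescaling described above.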

  9. Aerial-Photointerpretation of landslides along the Ohio and Mississippi rivers

    USGS Publications Warehouse

    Su, W.-J.; Stohr, C.

    2000-01-01

    A landslide inventory was conducted along the Ohio and Mississippi rivers in the New Madrid Seismic Zone of southern Illinois, between the towns of Olmsted and Chester, Illinois. Aerial photography and field reconnaissance identified 221 landslides of three types: rock/debris falls, block slides, and undifferentiated rotational/translational slides. Most of the landslides are small- to medium-size, ancient rotational/translational features partially obscured by vegetation and modified by weathering. Five imagery sources were interpreted for landslides: 1:250,000-scale side-looking airborne radar (SLAR); 1:40,000-scale, 1:20,000-scale, and 1:6,000-scale black and white aerial photography; and low-altitude, oblique 35-mm color photography. Landslides were identified with three levels of confidence on the basis of distinguishing characteristics and ambiguous indicators. SLAR imagery permitted identification of a 520-hectare mega-landslide which would not have been identified on medium-scale aerial photography. The leaf-off, 35-mm color, oblique photography provided the best imagery for confident interpretation of the detailed features needed for smaller landslides.

  10. Automatic Classification of Aerial Imagery for Urban Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Paul, A.; Yang, C.; Breitkopf, U.; Liu, Y.; Wang, Z.; Rottensteiner, F.; Wallner, M.; Verworn, A.; Heipke, C.

    2018-04-01

    In this paper we investigate the potential of automatic supervised classification for urban hydrological applications. In particular, we contribute to runoff simulations using hydrodynamic urban drainage models. In order to assess whether the capacity of the sewers is sufficient to avoid surcharge within certain return periods, precipitation is transformed into runoff. The transformation of precipitation into runoff requires knowledge about the proportion of drainage-effective areas and their spatial distribution in the catchment area. Common simulation methods use the coefficient of imperviousness as an important parameter to estimate the overland flow, which subsequently contributes to the pipe flow. The coefficient of imperviousness is the percentage of area covered by impervious surfaces such as roofs or road surfaces. It is still common practice to assign the coefficient of imperviousness for each particular land parcel manually by visual interpretation of aerial images. Based on the classification results of this imagery, we contribute to an objective automatic determination of the coefficient of imperviousness. In this context we compare two classification techniques: Random Forests (RF) and Conditional Random Fields (CRF). Experiments performed on an urban test area show good results and confirm that the automated derivation of the coefficient of imperviousness, apart from being more objective and, thus, reproducible, delivers more accurate results than the interactive estimation. We achieve an overall accuracy of about 85 % for both classifiers. The root mean square error of the differences of the coefficient of imperviousness compared to the reference is 4.4 % for the CRF-based classification, and 3.8 % for the RF-based classification.
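
Given a classified raster, the coefficient of imperviousness as defined in this record is simply the share of pixels in impervious classes. A minimal numpy sketch, where the class codes are hypothetical:

```python
import numpy as np

# Hypothetical class codes for an urban land-cover classification.
IMPERVIOUS_CLASSES = [1, 2]   # e.g. 1 = roof, 2 = road surface

def coefficient_of_imperviousness(classified):
    # Percentage of pixels labelled as an impervious surface class.
    mask = np.isin(classified, IMPERVIOUS_CLASSES)
    return 100.0 * mask.sum() / classified.size
```

In practice the percentage would be computed per land parcel by masking the raster with parcel boundaries.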

  11. Operator selection for unmanned aerial systems: comparing video game players and pilots.

    PubMed

    McKinley, R Andy; McIntire, Lindsey K; Funke, Margaret A

    2011-06-01

    Popular unmanned aerial system (UAS) platforms such as the MQ-1 Predator and MQ-9 Reaper have experienced accelerated operations tempos that have outpaced current operator training regimens, leading to a shortage of qualified UAS operators. To find a surrogate to replace pilots of manned aircraft as UAS operators, this study evaluated video game players (VGPs), pilots, and a control group on a set of UAS operation relevant cognitive tasks. There were 30 participants who volunteered for this study and were divided into 3 groups: experienced pilots (P), experienced VGPs, and a control group (C). Each was trained on eight cognitive performance tasks relevant to unmanned flight tasks. The results indicated that pilots significantly outperform the VGP and control groups on multi-attribute cognitive tasks (Tank mean: VGP = 465 +/- 1.046 vs. P = 203 +/- 0.237 vs. C = 351 +/- 0.601). However, the VGPs outperformed pilots on cognitive tests related to visually acquiring, identifying, and tracking targets (final score: VGP = 594.28 +/- 8.708 vs. P = 563.33 +/- 8.787 vs. C = 568.21 +/- 8.224). Likewise, both VGPs and pilots performed similarly on the UAS landing task, but outperformed the control group (glide slope: VGP = 40.982 +/- 3.244 vs. P = 30.461 +/- 2.251 vs. C = 57.060 +/- 4.407). Cognitive skills learned in video game play may transfer to novel environments and improve performance in UAS tasks over individuals with no video game experience.

  12. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

    Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor-intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a means of crown surface reconstruction. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in a broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Because most conifer crowns have a solid geometric form, conifer crowns are modeled as a generalized hemi-ellipsoid. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm, and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) the problem of surface model composition from tree models, (2) the occlusion problem in disparity prediction from tree models, (3) the problem of integrating the predicted disparities into image matching, (4) the tree model edge effect reduction on the disparity map, (5) the occlusion problem in orthophoto production, and (6) the foreshortening problem in image matching, which is very serious for conifer crown surfaces. Solutions to the above problems are necessary for successful crown surface reconstruction. The model-based approach was applied to recover the
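
One common way to express a generalized hemi-ellipsoid crown model is a radius profile r(z) = R * (1 - (z/H)^s)^(1/s), where s = 2 recovers an ordinary hemi-ellipsoid. The record does not give the thesis's exact parameterization, so the formula and parameter names below are an assumption for illustration:

```python
def crown_radius(z, height, max_radius, shape=1.5):
    # Radius of a generalized hemi-ellipsoid crown at height z above
    # the crown base: r(z) = R * (1 - (z/H)^s)^(1/s).
    # shape=2 gives an ordinary hemi-ellipsoid; shape<2 gives a more
    # conical (conifer-like) profile. Parameterization is illustrative.
    return max_radius * (1.0 - (z / height) ** shape) ** (1.0 / shape)
```

The profile is widest (r = R) at the crown base and tapers to zero at the crown tip.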

  13. Use of Aerial Hyperspectral Imaging For Monitoring Forest Health

    Treesearch

    Milton O. Smith; Nolan J. Hess; Stephen Gulick; Lori G. Eckhardt; Roger D. Menard

    2004-01-01

    This project evaluates the effectiveness of aerial hyperspectral digital imagery in the assessment of forest health of loblolly stands in central Alabama. The imagery covers 50 square miles, in Bibb and Hale Counties, south of Tuscaloosa, AL, which includes intensive managed forest industry sites and National Forest lands with multiple use objectives. Loblolly stands...

  14. Observing Spring and Fall Phenology in a Deciduous Forest with Aerial Drone Imagery.

    PubMed

    Klosterman, Stephen; Richardson, Andrew D

    2017-12-08

    Plant phenology is a sensitive indicator of the effects of global change on terrestrial ecosystems and controls the timing of key ecosystem functions including photosynthesis and transpiration. Aerial drone imagery and photogrammetric techniques promise to advance the study of phenology by enabling the creation of distortion-free orthomosaics of plant canopies at the landscape scale, but with branch-level image resolution. The main goal of this study is to determine the leaf life cycle events corresponding to phenological metrics derived from automated analyses based on color indices calculated from drone imagery. For an oak-dominated, temperate deciduous forest in the northeastern USA, we find that plant area index (PAI) correlates with a canopy greenness index during spring green-up, and a canopy redness index during autumn senescence. Additionally, greenness and redness metrics are significantly correlated with the timing of budburst and leaf expansion on individual trees in spring. However, we note that the specific color index for individual trees must be carefully chosen if new foliage in spring appears red, rather than green, which we observed for some oak trees. In autumn, both decreasing greenness and increasing redness correlate with leaf senescence. Maximum redness indicates the beginning of leaf fall, and the progression of leaf fall correlates with decreasing redness. We also find that cooler air temperature microclimates near a forest edge bordering a wetland advance the onset of senescence. These results demonstrate the use of drones for characterizing the organismic-level variability of phenology in a forested landscape and advance our understanding of which phenophase transitions correspond to color-based metrics derived from digital image analysis.
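
Canopy greenness and redness indices of the kind used in this record are typically chromatic coordinates: each band divided by the per-pixel brightness sum. A minimal numpy sketch (the exact indices used in the study may differ; this shows the standard form):

```python
import numpy as np

def chromatic_coordinates(rgb):
    # Green and red chromatic coordinates of an RGB image:
    # GCC = G / (R + G + B), RCC = R / (R + G + B).
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    total[total == 0] = 1.0  # avoid dividing by zero on black pixels
    return rgb[..., 1] / total, rgb[..., 0] / total
```

Averaging these maps over a tree crown through the season yields the greenness/redness time series from which phenological transition dates are extracted.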

  15. Observing Spring and Fall Phenology in a Deciduous Forest with Aerial Drone Imagery

    PubMed Central

    Richardson, Andrew D.

    2017-01-01

    Plant phenology is a sensitive indicator of the effects of global change on terrestrial ecosystems and controls the timing of key ecosystem functions including photosynthesis and transpiration. Aerial drone imagery and photogrammetric techniques promise to advance the study of phenology by enabling the creation of distortion-free orthomosaics of plant canopies at the landscape scale, but with branch-level image resolution. The main goal of this study is to determine the leaf life cycle events corresponding to phenological metrics derived from automated analyses based on color indices calculated from drone imagery. For an oak-dominated, temperate deciduous forest in the northeastern USA, we find that plant area index (PAI) correlates with a canopy greenness index during spring green-up, and a canopy redness index during autumn senescence. Additionally, greenness and redness metrics are significantly correlated with the timing of budburst and leaf expansion on individual trees in spring. However, we note that the specific color index for individual trees must be carefully chosen if new foliage in spring appears red, rather than green—which we observed for some oak trees. In autumn, both decreasing greenness and increasing redness correlate with leaf senescence. Maximum redness indicates the beginning of leaf fall, and the progression of leaf fall correlates with decreasing redness. We also find that cooler air temperature microclimates near a forest edge bordering a wetland advance the onset of senescence. These results demonstrate the use of drones for characterizing the organismic-level variability of phenology in a forested landscape and advance our understanding of which phenophase transitions correspond to color-based metrics derived from digital image analysis. PMID:29292742

  16. The use of unmanned aerial vehicle imagery in intertidal monitoring

    NASA Astrophysics Data System (ADS)

    Konar, Brenda; Iken, Katrin

    2018-01-01

    Intertidal monitoring projects are often limited in their practicality because traditional methods such as visual surveys or removal of biota are often limited in the spatial extent for which data can be collected. Here, we used imagery from a small unmanned aerial vehicle (sUAV) to test its potential use in rocky intertidal and intertidal seagrass surveys in the northern Gulf of Alaska. Images captured by the sUAV in the high, mid and low intertidal strata on a rocky beach and within a seagrass bed were compared to data derived concurrently from observer visual surveys and to images taken by observers on the ground. Observer visual data always resulted in the highest taxon richness, but when observer data were aggregated to the lower taxonomic resolution obtained by the sUAV images, overall community composition was mostly similar between the two methods. Ground camera images and sUAV images yielded mostly comparable community composition despite the typically higher taxonomic resolution obtained by the ground camera. We conclude that monitoring goals or research questions that can be answered on a relatively coarse taxonomic level can benefit from an sUAV-based approach because it allows much larger spatial coverage within the time constraints of a low tide interval than is possible by observers on the ground. We demonstrated this large-scale applicability by using sUAV images to develop maps that show the distribution patterns and patchiness of seagrass.

  17. SAR imagery of the Grand Banks (Newfoundland) pack ice and its relationship to surface features

    NASA Technical Reports Server (NTRS)

    Argus, S. D.; Carsey, F. D.

    1988-01-01

    Synthetic Aperture Radar (SAR) data and aerial photographs were obtained over pack ice off the East Coast of Canada in March 1987 as part of the Labrador Ice Margin Experiment (LIMEX) pilot project. Examination of this data shows that although the pack ice off the Canadian East Coast appears essentially homogeneous to visible light imagery, two clearly defined zones of ice are apparent on C-band SAR imagery. To identify factors that create the zones seen on the radar image, aerial photographs were compared to the SAR imagery. Floe size data from the aerial photographs was compared to digital number values taken from SAR imagery of the same ice. The SAR data of the inner zone acquired three days apart over the melt period was also examined. The studies indicate that the radar response is governed by floe size and meltwater distribution.

  18. Remote sensing based detection of forested wetlands: An evaluation of LiDAR, aerial imagery, and their data fusion

    NASA Astrophysics Data System (ADS)

    Suiter, Ashley Elizabeth

    Multi-spectral imagery provides a robust and low-cost dataset for assessing wetland extent and quality over broad regions and is frequently used for wetland inventories. However, in forested wetlands, hydrology is obscured by tree canopy, making it difficult to detect with multi-spectral imagery alone. Because of this, classification of forested wetlands often includes greater errors than that of other wetland types. Elevation and terrain derivatives have been shown to be useful for modelling wetland hydrology, but few studies have addressed the use of LiDAR intensity data for detecting hydrology in forested wetlands. Due to the tendency of the LiDAR signal to be attenuated by water, this research proposed the fusion of LiDAR intensity data with LiDAR elevation, terrain data, and aerial imagery for the detection of forested wetland hydrology. We examined the utility of LiDAR intensity data and determined whether the fusion of LiDAR-derived data with multispectral imagery increased the accuracy of forested wetland classification compared with a classification performed with only multi-spectral imagery. Four classifications were performed: Classification A -- All Imagery, Classification B -- All LiDAR, Classification C -- LiDAR without Intensity, and Classification D -- Fusion of All Data. These classifications were performed using random forest and each resulted in a 3-foot resolution thematic raster of forested upland and forested wetland locations in Vermilion County, Illinois. The accuracies of these classifications were compared using Kappa Coefficient of Agreement. Importance statistics produced within the random forest classifier were evaluated in order to understand the contribution of individual datasets. Classification D, which used the fusion of LiDAR and multi-spectral imagery as input variables, had moderate to strong agreement between reference data and classification results.
It was found that Classification A performed using all the LiDAR data and its derivatives
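
The Kappa Coefficient of Agreement used to compare these classifications can be computed from a confusion matrix of reference versus predicted classes. A minimal sketch (the class layout is illustrative):

```python
def cohens_kappa(confusion):
    # Cohen's kappa from a confusion matrix
    # (rows = reference classes, columns = predicted classes):
    # kappa = (p_observed - p_expected) / (1 - p_expected)
    n = float(sum(sum(row) for row in confusion))
    k = len(confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion) for i in range(k)
    ) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Kappa of 1.0 indicates perfect agreement, 0.0 agreement no better than chance; "moderate to strong agreement" is often read as roughly 0.4 to 0.8.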

  19. Context-Based Urban Terrain Reconstruction from Uav-Videos for Geoinformation Applications

    NASA Astrophysics Data System (ADS)

    Bulatov, D.; Solbrig, P.; Gross, H.; Wernerus, P.; Repasi, E.; Heipke, C.

    2011-09-01

    Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore the needs of covering ad-hoc demand and performing a close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms are constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure of obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure - orientation, dense reconstruction, urban terrain modeling and geo-referencing - are robust, straightforward, and nearly fully automatic. The two last steps - namely, urban terrain modeling from almost-nadir videos and co-registration of models - represent the main contribution of this work and will therefore be covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, as well as instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data-set and outline ideas for future work.

  20. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper was to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAV) into a digital terrain model (DTM) and implementing the model into a virtual reality system (VR). The object of the study was a natural aggregate heap of an irregular shape with height differences of up to 11 m. Based on the obtained photos, three point clouds (varying in the level of detail) were generated for the 20,000-m2 area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made based on seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The obtained several-centimetre differences between the control points in the field and the ones from the model might testify to the usefulness of the described algorithm for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented into the VR system, which enables the most lifelike exploration of 3D terrain plasticity in real time, thanks to the first person view mode (FPV). In this mode, the user observes an object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside, and virtually analysing the terrain as a direct animator of the observations.
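
The control-point check described here is a root-mean-square error between field-surveyed coordinates and the corresponding coordinates read from the 3D model. A minimal sketch of that per-coordinate comparison:

```python
import math

def rmse(observed, modelled):
    # Root-mean-square error between field-surveyed control-point
    # coordinates and the corresponding coordinates in the model.
    return math.sqrt(
        sum((o - m) ** 2 for o, m in zip(observed, modelled)) / len(observed)
    )
```

Applied separately to the X, Y, and Z coordinates of the seven control points, this yields the per-axis accuracy figures quoted in such studies.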

  1. Satellite Imagery Assisted Road-Based Visual Navigation System

    NASA Astrophysics Data System (ADS)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* to build a feature database. The same algorithm then detects features in an on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates them with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used.

  2. Using high-resolution digital aerial imagery to map land cover

    USGS Publications Warehouse

    Dieck, J.J.; Robinson, Larry

    2014-01-01

    The Upper Midwest Environmental Sciences Center (UMESC) has used aerial photography to map land cover/land use on federally owned and managed lands for over 20 years. Until recently, that process used 23- by 23-centimeter (9- by 9-inch) analog aerial photos to classify vegetation along the Upper Mississippi River System, on National Wildlife Refuges, and in National Parks. With digital aerial cameras becoming more common and offering distinct advantages over analog film, UMESC transitioned to an entirely digital mapping process in 2009. Though not without challenges, this method has proven to be much more accurate and efficient when compared to the analog process.

  3. Reconstructing the archaeological landscape of Southern Dobrogea: integrating imagery

    NASA Astrophysics Data System (ADS)

    Oltean, I. A.; Hanson, W. S.

    2007-10-01

    The recent integrated aerial photographic assessment of Southern Dobrogea (Romania) is part of the first author's British Academy funded research programme 'Contextualizing change on the Lower Danube: Roman impact on Daco-Getic landscapes'. This seeks to study the effect of the Roman conquest and occupation on the native Daco-Getic settlement pattern on the Lower Danube. The methodology involves integrating a range of remotely sensed imagery including: low altitude oblique aerial photographs, obtained through traditional aerial reconnaissance; medium altitude vertical photographs produced by German, British and American military reconnaissance during the Second World War, selected from The Aerial Reconnaissance Archive at Keele University; and high altitude de-classified military satellite imagery (Corona) from the 1960s, acquired from the USGS. The value of this approach lies not just in that it enables extensive detailed mapping of large archaeological landscapes in Romania for the first time, but also that it allows the recording of archaeological features permanently destroyed by more recent development across wide areas. This paper presents some results and addresses some of the problems raised by each method of data acquisition.

  4. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    PubMed

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images captured at an altitude of 30 m, and that the resulting RS-image data at altitudes of 60 and 100 m provided weed cover and herbicide application maps comparable to those from UAV-images from real flights.
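
Resampling to a new width/height in pixels, as described here, can be sketched with a simple nearest-neighbour scheme in numpy. This is an illustrative minimal version; the study's actual resampling method is not specified in the record and may use area averaging or interpolation instead:

```python
import numpy as np

def resample(img, new_h, new_w):
    # Nearest-neighbour resampling: map each output pixel back to the
    # source pixel whose index scales proportionally.
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]
```

Downsampling a 30-m image by a factor of two or three roughly mimics the ground sampling distance of a 60-m or 100-m flight.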

  5. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping

    PubMed Central

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images captured at an altitude of 30 m, and that the resulting RS-image data at altitudes of 60 and 100 m provided weed cover and herbicide application maps comparable to those from UAV-images from real flights. PMID:26274960

  6. Aerial surveillance vehicles augment security at shipping ports

    NASA Astrophysics Data System (ADS)

    Huck, Robert C.; Al Akkoumi, Muhammad K.; Cheng, Samuel; Sluss, James J., Jr.; Landers, Thomas L.

    2008-10-01

    With the ever-present political and economic threats to commerce, technological innovations provide a means to secure the transportation infrastructure that will allow efficient and uninterrupted freight-flow operations for trade. Currently, freight coming into United States ports is "spot checked" upon arrival and stored in a container yard while awaiting the next mode of transportation. For the most part, only fences and security patrols protect these container storage yards. To augment these measures, the authors propose the use of aerial surveillance vehicles equipped with video cameras and wireless video downlinks to provide a bird's-eye view of port facilities to security control centers and security patrols on the ground. The initial investigation described in this paper demonstrates the use of unmanned aerial surveillance vehicles as a viable method for providing video surveillance of container storage yards. This research provides the foundation for a follow-on project to use autonomous aerial surveillance vehicles coordinated with autonomous ground surveillance vehicles for enhanced port security applications.

  7. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
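
Frame averaging against zero-mean noise, as mentioned in this record, is a pixel-wise temporal mean: averaging N frames reduces the noise standard deviation by roughly a factor of sqrt(N). A minimal numpy sketch:

```python
import numpy as np

def average_frames(frames):
    # Pixel-wise temporal mean of a list of equally sized frames.
    # With zero-mean noise, the averaged frame converges to the
    # underlying static scene as more frames are accumulated.
    return np.mean(np.stack(frames), axis=0)
```

This only helps when the scene is static (or registered) across the averaged frames; motion smears instead of denoises.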

  8. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video is becoming more practical as a data gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  9. Use of video observation and motor imagery on jumping performance in national rhythmic gymnastics athletes.

    PubMed

    Battaglia, Claudia; D'Artibale, Emanuele; Fiorilli, Giovanni; Piazza, Marina; Tsopani, Despina; Giombini, Arrigo; Calcagno, Giuseppe; di Cagno, Alessandra

    2014-12-01

    The aim of this study was to evaluate whether a mental training protocol could improve gymnastic jumping performance. Seventy-two rhythmic gymnasts were randomly divided into an experimental and control group. At baseline, experimental group completed the Movement Imagery Questionnaire Revised (MIQ-R) to assess the gymnast ability to generate movement imagery. A repeated measures design was used to compare two different types of training aimed at improving jumping performance: (a) video observation and PETTLEP mental training associated with physical practice, for the experimental group, and (b) physical practice alone for the control group. Before and after six weeks of training, their jumping performance was measured using the Hopping Test (HT), Drop Jump (DJ), and Counter Movement Jump (CMJ). Results revealed differences between jumping parameters F(1,71)=11.957; p<.01, and between groups F(1,71)=10.620; p<.01. In the experimental group there were significant correlations between imagery ability and the post-training Flight Time of the HT, r(34)=-.295, p<.05 and the DJ, r(34)=-.297, p<.05. The application of the protocol described herein was shown to improve jumping performance, thereby preserving the elite athlete's energy for other tasks. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Mapping and characterizing selected canopy tree species at the Angkor World Heritage site in Cambodia using aerial data.

    PubMed

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables.
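    The reported congruence between segmentation-derived and field-measured crown widths (Spearman's rho 0.782) can be checked with a rank correlation. A minimal NumPy sketch (the data in the test are hypothetical, not the study's measurements; ties are not average-ranked here):

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman's rank correlation: Pearson correlation of the ranks.
    # argsort-of-argsort yields 0-based ranks for distinct values.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

For tied measurements, average ranks (e.g. via `scipy.stats.rankdata`) would be needed for the exact statistic.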

  11. Mapping and Characterizing Selected Canopy Tree Species at the Angkor World Heritage Site in Cambodia Using Aerial Data

    PubMed Central

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia’s tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman’s rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables. PMID:25902148

  12. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian

    2017-12-01

    UAV photogrammetry is currently characterized by largely automated and efficient data processing. Low-altitude imaging is increasingly used in applications such as city mapping, corridor mapping, road and pipeline inspection, and mapping of large areas, e.g., forests. In addition, high-resolution video (HD and larger) is increasingly acquired at low altitude: on the one hand it delivers many details and characteristics of ground-surface features, and on the other it presents new challenges in data processing. The determination of the exterior orientation elements therefore plays a substantial role in the detail of digital terrain models and in artefact-free orthophoto generation. In parallel, research is conducted on the quality of images acquired from UAVs and on the quality of derived products such as orthophotos. Despite the rapid development of UAV photogrammetry, Automatic Aerial Triangulation (AAT) based on GPS/INS observations and ground control points is still necessary. During a low-altitude photogrammetric flight, the approximate exterior orientation elements registered by the UAV are affected by shift/bias errors. In this article, methods for determining the shift/bias error are presented. Two solutions are applied in the digital aerial triangulation. In the first method, the shift/bias error was determined together with the drift/bias error, the exterior orientation elements, and the ground control point coordinates. In the second method, the shift/bias error was determined together with the exterior orientation elements and the ground control point coordinates, with the drift/bias error set equal to 0. Comparing the two methods, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.
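    As an illustration of the simplest case, a constant per-axis shift/bias between the GPS/INS-recorded projection centres and their adjusted values reduces, in least squares, to the mean of the position residuals. A hedged NumPy sketch (function name and data are illustrative, not the paper's adjustment model, which estimates the shift jointly with the other unknowns):

```python
import numpy as np

def estimate_shift_bias(recorded, adjusted):
    """Least-squares estimate of a constant shift (bias) per axis:
    for a pure-shift model this is the mean of the per-image
    position residuals (adjusted minus recorded)."""
    residuals = np.asarray(adjusted, float) - np.asarray(recorded, float)
    return residuals.mean(axis=0)
```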

  13. Online Aerial Terrain Mapping for Ground Robot Navigation

    PubMed Central

    Peterson, John; Chaudhry, Haseeb; Abdelatty, Karim; Bird, John; Kochersberger, Kevin

    2018-01-01

    This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle’s overhead view to inform the ground vehicle’s path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS) and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles. PMID:29461496
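    The energy-efficient path generation described above can be illustrated with a least-cost (Dijkstra) search over a raster of per-class traversal costs. A self-contained sketch on a 4-connected grid (illustrative, not the authors' planner):

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra over a raster of per-cell traversal costs.
    Returns the cell path and its total accumulated cost."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk back from goal to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], float(dist[goal])
```

In practice each terrain class would map to an energy cost per metre before running the search.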

  14. Online Aerial Terrain Mapping for Ground Robot Navigation.

    PubMed

    Peterson, John; Chaudhry, Haseeb; Abdelatty, Karim; Bird, John; Kochersberger, Kevin

    2018-02-20

    This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle's overhead view to inform the ground vehicle's path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS) and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles.

  15. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system.

    PubMed

    Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe

    2016-03-01

    Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution, hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of the unmanned aerial system (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the riparian zone ecological integrity but also because of their use for river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes, 79.5 and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of site 1 performed the best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach

  16. Mapping Urban Tree Canopy Coverage and Structure using Data Fusion of High Resolution Satellite Imagery and Aerial Lidar

    NASA Astrophysics Data System (ADS)

    Elmes, A.; Rogan, J.; Williams, C. A.; Martin, D. G.; Ratick, S.; Nowak, D.

    2015-12-01

    Urban tree canopy (UTC) coverage is a critical component of sustainable urban areas. Trees provide a number of important ecosystem services, including air pollution mitigation, water runoff control, and aesthetic and cultural values. Critically, urban trees also act to mitigate the urban heat island (UHI) effect by shading impervious surfaces and via evaporative cooling. The cooling effect of urban trees can be seen locally, with individual trees reducing home HVAC costs, and at a citywide scale, reducing the extent and magnitude of an urban area's UHI. In order to accurately model the ecosystem services of a given urban forest, it is essential to map in detail the condition and composition of these trees at a fine scale, capturing individual tree crowns and their vertical structure. This paper presents methods for delineating UTC and measuring canopy structure at fine spatial resolution (<1m). These metrics are essential for modeling the HVAC benefits from UTC for individual homes, and for assessing the ecosystem services for entire urban areas. Such maps have previously been made using a variety of methods, typically relying on high resolution aerial or satellite imagery. This paper seeks to contribute to this growing body of methods, relying on a data fusion method to combine the information contained in high resolution WorldView-3 satellite imagery and aerial lidar data using an object-based image classification approach. The study area, Worcester, MA, has recently undergone a large-scale tree removal and reforestation program, following a pest eradication effort. Therefore, the urban canopy in this location provides a wide mix of tree age class and functional type, ideal for illustrating the effectiveness of the proposed methods. Early results show that the object-based classifier is indeed capable of identifying individual tree crowns, while continued research will focus on extracting crown structural characteristics using lidar-derived metrics. Ultimately

  17. Smoking in Video Games: A Systematic Review.

    PubMed

    Forsyth, Susan R; Malone, Ruth E

    2016-06-01

    Video games are played by a majority of adolescents, yet little is known about whether and how video games are associated with smoking behavior and attitudes. This systematic review examines research on the relationship between video games and smoking. We searched MEDLINE, psycINFO, and Web of Science through August 20, 2014. Twenty-four studies met inclusion criteria. Studies were synthesized qualitatively in four domains: the prevalence and incidence of smoking imagery in video games (n = 6), video game playing and smoking behavior (n = 11), video game addiction and tobacco addiction (n = 5) and genre-specific game playing and smoking behavior (n = 3). Tobacco content was present in a subset of video games. The literature is inconclusive as to whether exposure to video games as a single construct is associated with smoking behavior. Four of five studies found an association between video game addiction and smoking. For genre-specific game playing, studies suggest that the type of game played affected association with smoking behavior. Research on how playing video games influences adolescents' perceptions of smoking and smoking behaviors is still in its nascence. Further research is needed to understand how adolescents respond to viewing and manipulating tobacco imagery, and whether engaging in game smoking translates into changes in real-world attitudes or behavior. Smoking imagery in video games may contribute to normalizing adolescent smoking. A large body of research has shown that smoking imagery in a variety of media types contributes to adolescent smoking uptake and the normalization of smoking behavior, and almost 90% of adolescents play video games, yet there has never been a published systematic review of the literature on this important topic. This is the first systematic review to examine the research on tobacco and video games. We found that tobacco imagery is indeed present in video games, the relationship between video game playing and smoking

  18. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture

    NASA Astrophysics Data System (ADS)

    Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac

    2015-12-01

    Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small, unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root-mean-squared-error of 5.31 μg cm⁻², efficiency of 0.76, and 9 relevance vectors.
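    The accuracy figures quoted above, root-mean-squared error and (Nash–Sutcliffe) efficiency, follow standard definitions; a small NumPy sketch (helper names are illustrative):

```python
import numpy as np

def rmse(obs, pred):
    """Root-mean-squared error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 for a perfect model, <= 0 when
    the model is no better than predicting the observed mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(1 - np.sum((obs - pred) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```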

  19. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of remotely sensed imagery from unmanned aerial vehicle (UAV) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differentiation in the estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R²) of 0.63 at the 0.01 level, against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could bring about better results in crown closure estimation, closer to the actual condition of the moso bamboo forest.
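    Fully constrained linear SMA imposes non-negativity and sum-to-one constraints on the endmember abundances. One common way to approximate the sum-to-one constraint is to append a heavily weighted extra equation and solve with non-negative least squares; a sketch using SciPy's `nnls` (illustrative, not the authors' exact solver):

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(endmembers, pixel, weight=1e3):
    """Fully constrained least-squares unmixing: abundances are
    non-negative (enforced by nnls) and approximately sum to one
    (enforced by an appended, heavily weighted equation)."""
    E = np.asarray(endmembers, float)            # bands x endmembers
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(np.asarray(pixel, float), weight)
    abundances, _ = nnls(A, b)
    return abundances
```

Larger `weight` tightens the sum-to-one constraint at the cost of numerical conditioning.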

  20. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  1. Applications of thermal infrared imagery for energy conservation and environmental surveys

    NASA Technical Reports Server (NTRS)

    Carney, J. R.; Vogel, T. C.; Howard, G. E., Jr.; Love, E. R.

    1977-01-01

    The survey procedures, developed during the winter and summer of 1976, employ color and color infrared aerial photography, thermal infrared imagery, and a handheld infrared imaging device. The resulting imagery was used to detect building heat losses, deteriorated insulation in built-up type building roofs, and defective underground steam lines. The handheld thermal infrared device, used in conjunction with the aerial thermal infrared imagery, provided a method for detecting and locating those roof areas that were underlain with wet insulation. In addition, the handheld infrared device was employed to conduct a survey of a U.S. Army installation's electrical distribution system under full operating loads. This survey proved to be cost effective procedure for detecting faulty electrical insulators and connections that if allowed to persist could have resulted in both safety hazards and loss in production.

  2. Delineating wetland catchments and modeling hydrologic connectivity using lidar data and aerial imagery

    NASA Astrophysics Data System (ADS)

    Wu, Qiusheng; Lane, Charles R.

    2017-07-01

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features with seasonal to permanent inundation patterning characterized by nested hierarchical structures and dynamic filling-spilling-merging surface-water hydrological processes. Differentiating and appropriately processing such ecohydrologically meaningful features remains a major technical terrain-processing challenge, particularly as high-resolution spatial data are increasingly used to support modeling and geographic analysis needs. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution lidar data and aerial imagery. The graph-theory-based contour tree method was used to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams was simulated using the least-cost-path algorithm. The resulting flow network delineated potential flow paths connecting wetland depressions to each other or to the river network on scales finer than those available through the National Hydrography Dataset. The results demonstrated that our proposed framework is promising for improving overland flow simulation and hydrologic connectivity analysis.
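    The depression filling that traditional terrain processing applies (and whose removal of real wetland features motivates this study) can be illustrated with a priority-flood sketch: each cell is raised to the lowest spill elevation reachable from the raster edge, so the filled surface minus the original DEM marks depressions. A minimal NumPy sketch (4-connected, illustrative):

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling: process cells outward from
    the raster boundary in order of increasing elevation, raising
    each interior cell to at least its spill elevation."""
    dem = np.asarray(dem, float)
    rows, cols = dem.shape
    filled = np.full(dem.shape, np.inf)
    pq = []
    # seed the priority queue with the boundary cells
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                filled[r, c] = dem[r, c]
                heapq.heappush(pq, (dem[r, c], r, c))
    while pq:
        h, r, c = heapq.heappop(pq)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and filled[nr, nc] == np.inf:
                filled[nr, nc] = max(dem[nr, nc], h)
                heapq.heappush(pq, (filled[nr, nc], nr, nc))
    return filled
```

Cells where `fill_depressions(dem) - dem > 0` are depression candidates that a wetland-aware workflow would retain rather than erase.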

  3. Structural geologic interpretations from radar imagery

    USGS Publications Warehouse

    Reeves, Robert G.

    1969-01-01

    Certain structural geologic features may be more readily recognized on sidelooking airborne radar (SLAR) images than on conventional aerial photographs, other remote sensor imagery, or by ground observations. SLAR systems look obliquely to one or both sides and their images resemble aerial photographs taken at low sun angle with the sun directly behind the camera. They differ from air photos in geometry, resolution, and information content. Radar operates at much lower frequencies than the human eye, camera, or infrared sensors, and thus "sees" differently. The lower frequency enables it to penetrate most clouds and some precipitation, haze, dust, and some vegetation. Radar provides its own illumination, which can be closely controlled in intensity and frequency. It is narrow band, or essentially monochromatic. Low relief and subdued features are accentuated when viewed from the proper direction. Runs over the same area in significantly different directions (more than 45° from each other), show that images taken in one direction may emphasize features that are not emphasized on those taken in the other direction; optimum direction is determined by those features which need to be emphasized for study purposes. Lineaments interpreted as faults stand out on radar imagery of central and western Nevada; folded sedimentary rocks cut by faults can be clearly seen on radar imagery of northern Alabama. In these areas, certain structural and stratigraphic features are more pronounced on radar images than on conventional photographs; thus radar imagery materially aids structural interpretation.

  4. Visualization of simulated urban spaces: inferring parameterized generation of streets, parcels, and aerial imagery.

    PubMed

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul

    2009-01-01

    Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km² area surrounding Seattle.

  5. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with the Sauvola local adaptive thresholding algorithm. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods.

  6. Orthorectification, mosaicking, and analysis of sub-decimeter resolution UAV imagery for rangeland monitoring

    USDA-ARS?s Scientific Manuscript database

    Unmanned aerial vehicles (UAVs) offer an attractive platform for acquiring imagery for rangeland monitoring. UAVs can be deployed quickly and repeatedly, and they can obtain sub-decimeter resolution imagery at lower image acquisition costs than with piloted aircraft. Low flying heights result in ima...

  7. Application of airborne thermal imagery to surveys of Pacific walrus

    USGS Publications Warehouse

    Burn, D.M.; Webber, M.A.; Udevitz, M.S.

    2006-01-01

    We conducted tests of airborne thermal imagery of Pacific walrus to determine if this technology can be used to detect walrus groups on sea ice and estimate the number of walruses present in each group. In April 2002 we collected thermal imagery of 37 walrus groups in the Bering Sea at spatial resolutions ranging from 1-4 m. We also collected high-resolution digital aerial photographs of the same groups. Walruses were considerably warmer than the background environment of ice, snow, and seawater and were easily detected in thermal imagery. We found a significant linear relation between walrus group size and the amount of heat measured by the thermal sensor at all 4 spatial resolutions tested. This relation can be used in a double-sampling framework to estimate total walrus numbers from a thermal survey of a sample of units within an area and photographs from a subsample of the thermally detected groups. Previous methods used in visual aerial surveys of Pacific walrus have sampled only a small percentage of available habitat, resulting in population estimates with low precision. Results of this study indicate that an aerial survey using a thermal sensor can cover as much as 4 times the area per hour of flight time with greater reliability than visual observation.
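    The double-sampling estimator described above fits the linear size-heat relation on the photographed subsample of groups, then applies it to every thermally detected group. A hedged NumPy sketch (toy numbers in the test, not the survey data):

```python
import numpy as np

def double_sampling_estimate(heat_sub, size_sub, heat_all):
    """Fit group size ~ thermal signal on the photographed subsample,
    then predict and total the sizes of all thermally detected groups."""
    slope, intercept = np.polyfit(heat_sub, size_sub, 1)
    return float(np.sum(slope * np.asarray(heat_all, float) + intercept))
```

A full survey estimator would also propagate the regression uncertainty into the population variance.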

  8. ERTS imagery for ground-water investigations

    USGS Publications Warehouse

    Moore, Gerald K.; Deutsch, Morris

    1975-01-01

    ERTS imagery offers the first opportunity to apply moderately high-resolution satellite data to the nationwide study of water resources. This imagery is both a tool and a form of basic data. Like other tools and basic data, it should be considered for use in ground-water investigations. The main advantage of its use will be to reduce the need for field work. In addition, however, broad regional features may be seen easily on ERTS imagery, whereas they would be difficult or impossible to see on the ground or on low-altitude aerial photographs. Some present and potential uses of ERTS imagery are to locate new aquifers, to study aquifer recharge and discharge, to estimate ground-water pumpage for irrigation, to predict the location and type of aquifer management problems, and to locate and monitor strip mines which commonly are sources for acid mine drainage. In many cases, boundaries which are gradational on the ground appear to be sharp on ERTS imagery. Initial results indicate that the accuracy of maps produced from ERTS imagery is completely adequate for some purposes.

  9. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
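    A crude version of this triage, flagging segments with almost no motion or very fast apparent motion, can be sketched with mean absolute frame differences as a cheap stand-in for optical-flow magnitude (thresholds and labels are illustrative, not the paper's):

```python
import numpy as np

def motion_score(frames):
    """Mean absolute inter-frame difference for each transition in a
    (frames, rows, cols) stack: a cheap proxy for motion magnitude."""
    f = np.asarray(frames, float)
    return np.abs(np.diff(f, axis=0)).mean(axis=(1, 2))

def flag_segment(frames, low=1.0, high=50.0):
    """Tag a segment as static, fast-motion, or usable based on its
    average motion score."""
    s = motion_score(frames).mean()
    if s < low:
        return "static"
    if s > high:
        return "fast-motion"
    return "usable"
```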

  10. Photovoltaic panel extraction from very high-resolution aerial imagery using region-line primitive association analysis and template matching

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao

    2018-07-01

    In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim for accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. An automatic template generation and matching method for PVP extraction from VHR imagery is designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.
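The template-matching side of such a method can be sketched with brute-force normalized cross-correlation; this is a generic baseline, not the authors' region-line association framework, and the function name, synthetic scene, and panel location are invented for the sketch:

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation; returns best (row, col) and score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Synthetic scene with a bright rectangular "panel" at rows 12:18, cols 20:30.
scene = np.random.default_rng(0).normal(0.0, 0.05, (40, 60))
scene[12:18, 20:30] += 1.0
template = np.zeros((8, 12))
template[1:7, 1:11] = 1.0  # panel with a dark border
loc, score = ncc_match(scene, template)
print(loc)  # (11, 19) -- template border aligns one pixel outside the panel
```

The paper's contribution is precisely that the matching template is generated automatically from region-line primitives rather than hand-drawn as here.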

  11. Low-altitude aerial color digital photographic survey of the San Andreas Fault

    USGS Publications Warehouse

    Lynch, David K.; Hudnut, Kenneth W.; Dearborn, David S.P.

    2010-01-01

    Ever since 1858, when Gaspard-Félix Tournachon (pen name Félix Nadar) took the first aerial photograph (Professional Aerial Photographers Association 2009), the scientific value and popular appeal of such pictures have been widely recognized. Indeed, Nadar patented the idea of using aerial photographs in mapmaking and surveying. Since then, aerial imagery has flourished, eventually making the leap to space and to wavelengths outside the visible range. Yet until recently, the availability of such surveys has been limited to technical organizations with significant resources. Geolocation required extensive time and equipment, and distribution was costly and slow. While these situations still plague older surveys, modern digital photography and lidar systems acquire well-calibrated and easily shared imagery, although expensive, platform-specific software is sometimes still needed to manage and analyze the data. With current consumer-level electronics (cameras and computers) and broadband internet access, acquisition and distribution of large imaging data sets are now possible for virtually anyone. In this paper we demonstrate a simple, low-cost means of obtaining useful aerial imagery by reporting two new, high-resolution, low-cost, color digital photographic surveys of selected portions of the San Andreas fault in California. All pictures are in standard jpeg format. The first set of imagery covers a 92-km-long section of the fault in Kern and San Luis Obispo counties and includes the entire Carrizo Plain. The second covers the region from Lake of the Woods to Cajon Pass in Kern, Los Angeles, and San Bernardino counties (151 km) and includes Lone Pine Canyon soon after the ground was largely denuded by the Sheep Fire of October 2009. The first survey produced a total of 1,454 oblique digital photographs (4,288 x 2,848 pixels, average 6 Mb each) and the second produced 3,762 nadir images from an elevation of approximately 150 m above ground level (AGL) on the

  12. INFLUENCE OF REMOTE SENSING IMAGERY SOURCE ON QUANTIFICATION OF RIPARIAN LAND COVER/LAND USE

    EPA Science Inventory

    This paper compares approaches to quantifying land cover/land use (LCLU) in riparian corridors of 23 watersheds in Oregon's Willamette Valley using aerial photography (AP) and Thematic Mapper (TM) imagery. For each imagery source, we quantified LCLU adjacent to stream networks ac...

  13. Mapping trees outside forests using high-resolution aerial imagery: a comparison of pixel- and object-based classification approaches.

    PubMed

    Meneguzzo, Dacia M; Liknes, Greg C; Nelson, Mark D

    2013-08-01

    Discrete trees and small groups of trees in nonforest settings are considered an essential resource around the world and are collectively referred to as trees outside forests (ToF). ToF provide important functions across the landscape, such as protecting soil and water resources, providing wildlife habitat, and improving farmstead energy efficiency and aesthetics. Despite the significance of ToF, forest and other natural resource inventory programs and geospatial land cover datasets that are available at a national scale do not include comprehensive information regarding ToF in the United States. Additional ground-based data collection and acquisition of specialized imagery to inventory these resources are expensive alternatives. As a potential solution, we identified two remote sensing-based approaches that use free high-resolution aerial imagery from the National Agriculture Imagery Program (NAIP) to map all tree cover in an agriculturally dominant landscape. We compared the results obtained using an unsupervised per-pixel classifier (independent component analysis [ICA]) and an object-based image analysis (OBIA) procedure in Steele County, Minnesota, USA. Three types of accuracy assessments were used to evaluate how each method performed in terms of: (1) producing a county-level estimate of total tree-covered area, (2) correctly locating tree cover on the ground, and (3) how tree cover patch metrics computed from the classified outputs compared to those delineated by a human photo interpreter. Both approaches were found to be viable for mapping tree cover over a broad spatial extent and could serve to supplement ground-based inventory data. The ICA approach produced an estimate of total tree cover more similar to the photo-interpreted result, but the output from the OBIA method was more realistic in terms of describing the actual observed spatial pattern of tree cover.

  14. Scaling Sap Flow Results Over Wide Areas Using High-Resolution Aerial Multispectral Digital Imaging, Leaf Area Index (LAI) and MODIS Satellite Imagery in Saltcedar Stands on the Lower Colorado River

    NASA Astrophysics Data System (ADS)

    Murray, R.; Neale, C.; Nagler, P. L.; Glenn, E. P.

    2008-12-01

    Heat-balance sap flow sensors provide direct estimates of water movement through plant stems and can be used to accurately measure leaf-level transpiration (EL) and stomatal conductance (GS) over time scales ranging from 20-minutes to a month or longer in natural stands of plants. However, their use is limited to relatively small branches on shrubs or trees, as the gauged stem section needs to be uniformly heated by the heating coil to produce valid measurements. This presents a scaling problem in applying the results to whole plants, stands of plants, and larger landscape areas. We used high-resolution aerial multispectral digital imaging with green, red and NIR bands as a bridge between ground measurements of EL and GS, and MODIS satellite imagery of a flood plain on the Lower Colorado River dominated by saltcedar (Tamarix ramosissima). Saltcedar is considered to be a high-water-use plant, and saltcedar removal programs have been proposed to salvage water. Hence, knowledge of actual saltcedar ET rates is needed on western U.S. rivers. Scaling EL and GS to large landscape units requires knowledge of leaf area index (LAI) over large areas. We used a LAI model developed for riparian habitats on Bosque del Apache, New Mexico, to estimate LAI at our study site on the Colorado River. We compared the model estimates to ground measurements of LAI, determined with a Li-Cor LAI-2000 Plant Canopy Analyzer calibrated by leaf harvesting to determine Specific Leaf Area (SLA) (m2 leaf area per g dry weight leaves) of the different species on the floodplain. LAI could be adequately predicted from NDVI from aerial multispectral imagery and could be cross-calibrated with MODIS NDVI and EVI. Hence, we were able to project point measurements of sap flow and LAI over multiple years and over large areas of floodplain using aerial multispectral imagery as a bridge between ground and satellite data. The methods are applicable to riparian corridors throughout the western U.S.
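The NDVI-to-LAI bridge described above rests on two standard pieces: the NDVI ratio itself and a site-calibrated regression. A minimal sketch, assuming a simple linear LAI model for illustration (the Bosque del Apache model used in the study is not reproduced here, and all numbers are invented):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarded against zero denominators."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def fit_lai_model(ndvi_samples, lai_samples):
    """Least-squares fit of LAI = a * NDVI + b (illustrative calibration)."""
    a, b = np.polyfit(ndvi_samples, lai_samples, 1)
    return a, b

v = ndvi([0.5, 0.4], [0.1, 0.3])
print(v)  # [0.6667, 0.1429] approximately

# Hypothetical ground-calibration points lying exactly on LAI = 4 * NDVI.
a, b = fit_lai_model(np.array([0.2, 0.5, 0.8]), np.array([0.8, 2.0, 3.2]))
lai_map = a * v + b  # apply the fitted model to the imagery-derived NDVI
```

In the study the same kind of regression is cross-calibrated against MODIS NDVI and EVI so that leaf-level sap flow measurements can be projected over whole floodplain reaches.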

  15. Airborne Lidar and Aerial Imagery to Assess Potential Habitats for the Desert Tortoise (Gopherus agassizii)

    NASA Astrophysics Data System (ADS)

    Young, Michael; Andrews, John; Caldwell, Todd; Saylam, Kutalmis

    2017-04-01

    The desert Southwestern United States serves as the host to the habitat for several threatened and endangered species, one of which is the desert tortoise (Gopherus agassizii). The goal in this study was to develop a fine-scale, remote sensing-based model that predicts potential habitat locations of G. agassizii in the Boulder City (Nevada) Conservation Easement area (35,500 hectares). This was done by analyzing airborne Lidar data (5-7 points/m2) and color imagery (4 bands, 0.15 m resolution) and determining percent vegetation cover, shrub height and area, NDVI, and several geomorphic characteristics including slope, azimuth, roughness, etc. Other field data used herein include estimates of canopy area and species richness using 1271 line transects, and shrub height and canopy area using plant-specific measurements of 200 plants. Larrea tridentata and Ambrosia dumosa shrubs were identified using an algorithm that obtained an optimum combination of NDVI and average reflectance of the four bands (IR, R, G, B) from pixels in each image. Results identified more than 65 million shrubs across the study area, and indicate that percent vegetation cover from the aerial imagery across the site (13.92%) compared favorably (14.52%) to the estimate obtained from the line transects, though the lidar method yielded shrub heights approximately 60% of measured shrub heights. Plants and landscape properties were combined with known locations of tortoise burrows (visually observed in 2014), yielding a predictive model of potential tortoise habitats. Masks were created using roughness coefficient, slope percent, azimuth of burrow openings, elevation and percent ground cover to isolate areas more likely to host habitats. Combined, the masks isolated 55% of the total survey area, which would help target future field surveys. Overall, the vegetation map superimposed onto the background soil data could estimate the location of tortoise burrows.

  16. A study of video frame rate on the perception of moving imagery detail

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimum acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second scenes (slow and fast animal motion) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow- and fast-moving animal scenes. The highest single-trial frame rate averaged across all subjects for the slow and the fast scene was 6.2 and 4.8 fps, respectively. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.

  17. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421
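The temporal-saliency step (frame differencing followed by Sauvola local adaptive thresholding) can be sketched from the standard Sauvola formula T = m * (1 + k * (s / R - 1)), where m and s are the local window mean and standard deviation and R is the dynamic range of the standard deviation. The window size and k below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def sauvola_threshold(img, win=15, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k * (s / R - 1))."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = padded[r:r + win, c:c + win]
            m, s = w.mean(), w.std()
            out[r, c] = m * (1.0 + k * (s / R - 1.0))
    return out

def temporal_saliency(frame_a, frame_b, win=15, k=0.2):
    """Frame difference binarized with Sauvola -> candidate moving-target mask."""
    diff = np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))
    return diff > sauvola_threshold(diff, win=win, k=k)

a = np.zeros((32, 32))
b = np.zeros((32, 32))
b[10:16, 10:16] = 200.0  # a target appears between the two frames
mask = temporal_saliency(a, b)
print(mask.sum())  # 36 -- exactly the 6x6 target region
```

The naive double loop is what makes the full saliency pipeline time-consuming in practice, which is the motivation for the parallel distribution the authors describe.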

  18. The remote characterization of vegetation using Unmanned Aerial Vehicle photography

    USDA-ARS?s Scientific Manuscript database

    Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial phot...

  19. Sediment Sampling in Estuarine Mudflats with an Aerial-Ground Robotic Team

    PubMed Central

    Deusdado, Pedro; Guedes, Magno; Silva, André; Marques, Francisco; Pinto, Eduardo; Rodrigues, Paulo; Lourenço, André; Mendonça, Ricardo; Santana, Pedro; Corisco, José; Almeida, Susana Marta; Portugal, Luís; Caldeira, Raquel; Barata, José; Flores, Luis

    2016-01-01

    This paper presents a robotic team suited for bottom sediment sampling and retrieval in mudflats, targeting environmental monitoring tasks. The robotic team encompasses a four-wheel-steering ground vehicle, equipped with a drilling tool designed to be able to retain wet soil, and a multi-rotor aerial vehicle for dynamic aerial imagery acquisition. On-demand aerial imagery, properly fused on an aerial mosaic, is used by remote human operators for specifying the robotic mission and supervising its execution. This is crucial for the success of an environmental monitoring study, as often it depends on human expertise to ensure the statistical significance and accuracy of the sampling procedures. Although the literature is rich on environmental monitoring sampling procedures, in mudflats, there is a gap as regards including robotic elements. This paper closes this gap by also proposing a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Field trials in the south bank of the river Tagus’ estuary show the ability of the robotic system to successfully extract and transport bottom sediment samples for offline analysis. The results also show the efficiency of the extraction and the benefits when compared to (conventional) human-based sampling. PMID:27618060

  20. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
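The paper's simplified tilt-90° model is not reproduced here, but the collinearity equations it starts from can be sketched: an object point is rotated into the camera frame and scaled by the principal distance. The `project` function, focal length, and camera poses below are illustrative only:

```python
import numpy as np

def project(point, cam_center, R, f):
    """Collinearity equations: image coordinates of an object point.

    R rotates object-space vectors into the camera frame; f is the principal
    distance; the principal point is assumed at (0, 0).
    """
    d = R @ (np.asarray(point, dtype=float) - np.asarray(cam_center, dtype=float))
    return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])

cam = np.array([0.0, 0.0, 150.0])  # aerial camera 150 m above ground, nadir view
xy = project([30.0, 0.0, 0.0], cam, np.eye(3), f=0.025)
# xy is about [0.005, 0.0]: a 30 m ground offset maps to 5 mm in the image.

# Terrestrial-style attitude: omega = -90 deg, camera axis horizontal along +Y.
Rx = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, -1.0, 0.0]])
xy2 = project([0.0, 50.0, 150.0], cam, Rx, f=0.025)
# A point 50 m in front of the horizontal camera lands on the principal point.
```

The paper's simplification substitutes the fixed 90° tilt into these equations so that the X, Y, Z intersection from an aerial/terrestrial image pair becomes much cheaper to compute.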

  2. Review of the SAFARI 2000 RC-10 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Myers, Jeff; Shelton, Gary; Annegarn, Harrold; Peterson, David L. (Technical Monitor)

    2001-01-01

    This presentation will review the aerial photography collected by the NASA ER-2 aircraft during the SAFARI (Southern African Regional Science Initiative) year 2000 campaign. It will include specifications on the camera and film, and will show examples of the imagery. It will also detail the extent of coverage, and the procedures to obtain film products from the South African government. Also included will be some sample applications of aerial photography for various environmental applications, and its use in augmenting other SAFARI data sets.

  3. "A" Is for Aerial Maps and Art

    ERIC Educational Resources Information Center

    Todd, Reese H.; Delahunty, Tina

    2007-01-01

    The technology of satellite imagery and remote sensing adds a new dimension to teaching and learning about maps with elementary school children. Just a click of the mouse brings into view some images of the world that could only be imagined a generation ago. Close-up aerial pictures of the school and neighborhood quickly catch the interest of…

  4. Aerial image databases for pipeline rights-of-way management

    NASA Astrophysics Data System (ADS)

    Jadkowski, Mark A.

    1996-03-01

    Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with less people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership with NASA and James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company which operates major gas pipelines in New England, New York, and New Jersey.

  5. Aerial vehicles collision avoidance using monocular vision

    NASA Astrophysics Data System (ADS)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on a preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but unlike many other approaches is designed to work with large-scale objects as well. To localize aerial vehicle position the system of equations relating object coordinates in space and observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. Video database contained different types of aerial vehicles: aircrafts, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers under regular daylight conditions.

  6. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance, removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The 3D generated model provides warfighters additional situational awareness, tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data with Google Earth™ providing the imagery as well as real-world data, including data captured from an unmanned aerial vehicle flight.
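At the core of any structure-from-motion pipeline, each matched feature pair is converted into a 3D point by triangulation. A minimal linear (DLT) triangulation sketch with synthetic cameras follows; it illustrates the geometric step only, not the system's actual pipeline, and all matrices and points are invented:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

def proj(P, X):
    """Pinhole projection of a 3D point to normalized image coordinates."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two synthetic cameras: identity intrinsics, second one translated along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
# X_hat recovers X_true (exactly, since the observations are noise-free).
```

A full SfM system repeats this for thousands of matched features after estimating the camera poses themselves, then densifies and meshes the point cloud into the 3D model described above.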

  7. Looking for an old aerial photograph

    USGS Publications Warehouse

    ,

    1997-01-01

    Attempts to photograph the surface of the Earth date from the 1800's, when photographers attached cameras to balloons, kites, and even pigeons. Today, aerial photographs and satellite images are commonplace. The rate of acquiring aerial photographs and satellite images has increased rapidly in recent years. Views of the Earth obtained from aircraft or satellites have become valuable tools to Government resource planners and managers, land-use experts, environmentalists, engineers, scientists, and a wide variety of other users. Many people want historical aerial photographs for business or personal reasons. They may want to locate the boundaries of an old farm or a piece of family property. Or they may want a photograph as a record of changes in their neighborhood, or as a gift. The U.S. Geological Survey (USGS) maintains the Earth Science Information Centers (ESICs) to sell aerial photographs, remotely sensed images from satellites, a wide array of digital geographic and cartographic data, as well as the Bureau's well-known maps. Declassified photographs from early spy satellites were recently added to the ESIC offerings of historical images. Using the Aerial Photography Summary Record System database, ESIC researchers can help customers find imagery in the collections of other Federal agencies and, in some cases, those of private companies that specialize in esoteric products.

  8. Geomorphological relationships through the use of 2-D seismic reflection data, Lidar, and aerial imagery

    NASA Astrophysics Data System (ADS)

    Alesce, Meghan Elizabeth

    Barrier islands are crucial in protecting coastal environments. This study focuses on Dauphin Island, Alabama, located within the Northern Gulf of Mexico (NGOM) Barrier Island complex. It is one of many islands serving as natural protection for NGOM ecosystems and coastal cities. The NGOM barrier islands formed at 4 kya in response to a decrease in rate of sea level rise. The morphology of these islands changes with hurricanes, anthropogenic activity, and tidal and wave action. This study focuses on ancient incised valleys and their impact on island morphology and hurricane breaches. Using high-frequency 2-D seismic reflection data, four horizons, including the present seafloor, were interpreted. Subaerial portions of Dauphin Island were imaged using Lidar data and aerial imagery over a ten-year time span, as well as historical maps. Historical shorelines of Dauphin Island were extracted from aerial imagery and historical maps, and were compared to the location of incised valleys seen within the 2-D seismic reflection data. Erosion and deposition volumes of Dauphin Island from 1998 to 2010 (the time span covering hurricanes Ivan and Katrina) in the vicinity of Katrina Cut and Pelican Island were quantified using Lidar data. For the time period prior to Hurricane Ivan, an erosional volume of 46,382,552 m3 and a depositional volume of 16,113.6 m3 were quantified from Lidar data. The effects of Hurricane Ivan produced a total erosion volume of 4,076,041.5 m3. The erosional and depositional volumes of Katrina Cut were 7,562,068.5 m3 and 510,936.7 m3, respectively. More volume change was found within Pelican Pass. For the period between hurricanes Ivan and Katrina the erosion volume was 595,713.8 m3. This was mostly located within Katrina Cut. Total deposition for the same period, including in Pelican Pass, was 15,353,961 m3. Hurricane breaches were compared to ancient incised valleys seen within the 2-D seismic reflection results.
Breaches from hurricanes from 1849
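The erosion and deposition volumes quoted above come from differencing elevation surfaces between survey dates. A minimal sketch of DEM-of-difference volume accounting, with invented grids and an assumed cell area:

```python
import numpy as np

def erosion_deposition(dem_before, dem_after, cell_area):
    """Volumes from DEM differencing: negative change is erosion, positive is deposition."""
    dz = dem_after.astype(np.float64) - dem_before.astype(np.float64)
    erosion = -dz[dz < 0].sum() * cell_area      # m^3 of material lost
    deposition = dz[dz > 0].sum() * cell_area    # m^3 of material gained
    return erosion, deposition

# Synthetic before/after surfaces on a 10 x 10 grid of 1 m^2 cells.
before = np.full((10, 10), 2.0)
after = before.copy()
after[:5, :] -= 0.5   # 50 cells lose 0.5 m of elevation
after[5:, :2] += 1.0  # 10 cells gain 1.0 m
e, d = erosion_deposition(before, after, cell_area=1.0)
print(e, d)  # 25.0 10.0
```

Real Lidar differencing additionally requires co-registration of the two surveys and an uncertainty threshold so that noise is not counted as volume change.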

  9. Open set recognition of aircraft in aerial imagery using synthetic template models

    NASA Astrophysics Data System (ADS)

    Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert

    2017-05-01

    Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.

  10. Satellite imagery and discourses of transparency

    NASA Astrophysics Data System (ADS)

    Harris, Chad Vincent

    In the last decade there has been a dramatic increase in satellite imagery available in the commercial marketplace and to the public in general. Satellite imagery systems and imagery archives, a knowledge domain formerly monopolized by nation states, have become available to the public, both from declassified intelligence data and from fully integrated commercial vendors who create and market imagery data. Some of these firms have recently launched their own satellite imagery systems and created rather large imagery "architectures" that threaten to rival military reconnaissance systems. The increasing resolution of the imagery and the growing expertise of software and imagery interpretation developers has engendered a public discourse about the potentials for increased transparency in national and global affairs. However, transparency is an attribute of satellite remote sensing and imagery production that is taken for granted in the debate surrounding the growing public availability of high-resolution satellite imagery. This paper examines remote sensing and military photo reconnaissance imagery technology and the production of satellite imagery in the interests of contemplating the complex connections between imagery satellites, historically situated discourses about democratic and global transparency, and the formation and maintenance of nation state systems. Broader historical connections will also be explored between satellite imagery and the history of the use of cartographic and geospatial technologies in the formation and administrative control of nation states and in the discursive formulation of national identity. Attention will be on the technology itself as a powerful social actor through its connection to both national sovereignty and transcendent notions of scientific objectivity. The issues of the paper will be explored through a close look at aerial photography and satellite imagery both as communicative tools of power and as culturally relevant

  11. Utility of a scanning densitometer in analyzing remotely sensed imagery

    NASA Technical Reports Server (NTRS)

    Dooley, J. T.

    1976-01-01

    The utility of a scanning densitometer for analyzing imagery in the NASA Lewis Research Center's regional remote sensing program was evaluated. Uses studied include: (1) quick-look screening of imagery by means of density slicing, magnification, color coding, and edge enhancement; (2) preliminary category classification of both low- and high-resolution data bases; and (3) quantitative measurement of the extent of features within selected areas. The densitometer was capable of providing fast, convenient, and relatively inexpensive preliminary analysis of aerial and satellite photography and scanner imagery involving land cover, water quality, strip mining, and energy conservation.
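The density slicing and color coding described above can be sketched in a few lines of NumPy; the pixel values, class thresholds, and palette below are illustrative assumptions, not parameters from the densitometer study:

```python
import numpy as np

# Hypothetical 8-bit grayscale imagery (4x4); values are invented.
image = np.array([
    [ 10,  40,  90, 130],
    [ 20,  60, 110, 170],
    [ 30,  80, 140, 210],
    [ 50, 100, 160, 250],
], dtype=np.uint8)

# Density slicing: assign each pixel to a class by brightness range.
# The class boundaries are assumed, not taken from the original study.
thresholds = [64, 128, 192]
classes = np.digitize(image, thresholds)        # class ids 0..3

# Color-code the slices for a quick-look display (one RGB per class).
palette = np.array([
    [  0,   0, 255],    # class 0 (darkest)   -> blue
    [  0, 255,   0],    # class 1             -> green
    [255, 255,   0],    # class 2             -> yellow
    [255,   0,   0],    # class 3 (brightest) -> red
], dtype=np.uint8)
color_coded = palette[classes]

# Quantitative extent: fraction of the area falling in each class.
extent = np.bincount(classes.ravel(), minlength=4) / classes.size
```

The `extent` vector corresponds to the "quantitative measurement of the extent of features" use: each entry is the areal fraction of one density slice.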

  12. First video rate imagery from a 32-channel 22-GHz aperture synthesis passive millimetre wave imager

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.; Macpherson, Rod; Harvey, Andy; Hall, Peter; Hayward, Steve; Wilkinson, Peter; Taylor, Chris

    2011-11-01

    The first video rate imagery from a proof-of-concept 32-channel 22 GHz aperture synthesis imager is reported. This imager has been brought into operation over the first half of year 2011. Receiver noise temperatures have been measured to be ~453 K, close to original specifications, and the measured radiometric sensitivity agrees with the theoretical predictions for aperture synthesis imagers (2 K for a 40 ms integration time). The short-term (few seconds) magnitude stability in the cross-correlations, expressed as a fraction, was measured to have a mean of 3.45×10⁻⁴ with a standard deviation of ~2.30×10⁻⁴, whilst the figure for the phase was found to have a mean of essentially zero with a standard deviation of 0.0181°. The susceptibility of the system to aliasing for point sources in the scene was examined and found to be well understood. The system was calibrated and security-relevant indoor near-field and outdoor far-field imagery was created, at frame rates ranging from 1 to 200 frames per second. The results prove that an aperture synthesis imager can generate imagery in the near-field regime, successfully coping with the curved wave-fronts. The original objective of the project, to deliver a Technology Readiness Level (TRL) 4 laboratory demonstrator for aperture synthesis passive millimetre wave (PMMW) imaging, has been achieved. The project was co-funded by the Technology Strategy Board and the Royal Society of the United Kingdom.

  13. A Methodological Intercomparison of Topographic and Aerial Photographic Habitat Survey Techniques

    NASA Astrophysics Data System (ADS)

    Bangen, S. G.; Wheaton, J. M.; Bouwes, N.

    2011-12-01

    A severe decline in Columbia River salmonid populations and subsequent Federal listing of subpopulations has mandated both the monitoring of populations and evaluation of the status of available habitat. Numerous field and analytical methods exist to assist in the quantification of the abundance and quality of in-stream habitat for salmonids. These methods range from field 'stick and tape' surveys to spatially explicit topographic and aerial photographic surveys from a mix of ground-based and remotely sensed airborne platforms. Although several previous studies have assessed the quality of specific individual survey methods, an intercomparison of competing techniques across a diverse range of habitat conditions (wadeable headwater channels to non-wadeable mainstem channels) has not previously been undertaken. In this study, we seek to evaluate the relative quality (i.e. accuracy, precision, extent) of habitat metrics and inventories derived from an array of ground-based and remotely sensed surveys of varying degrees of sophistication, as well as quantify the effort and cost in conducting the surveys. Over the summer of 2010, seven sample reaches of varying habitat complexity were surveyed in the Lemhi River Basin, Idaho, USA. Complete topographic surveys were attempted at each site using rtkGPS, total station, ground-based LiDAR and traditional airborne LiDAR. Separate high spatial resolution aerial imagery surveys were acquired using a tethered blimp, a UAV, and a traditional fixed-wing aircraft. Here we also developed a relatively simple methodology for deriving bathymetry from aerial imagery that could be readily employed by instream habitat monitoring programs. The quality of bathymetric maps derived from aerial imagery was compared with rtkGPS topographic data.
The results are helpful for understanding the strengths and weaknesses of different approaches in specific conditions, and how a hybrid of data acquisition methods can be used to build a more complete
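Comparing imagery-derived bathymetry against rtkGPS check points, as described above, reduces to simple residual statistics; the depth values in this sketch are invented, not Lemhi Basin data:

```python
import numpy as np

# Hypothetical water depths (m): imagery-derived vs co-located rtkGPS
# survey points. Values are invented for illustration only.
derived = np.array([0.42, 0.85, 1.10, 0.30, 0.67, 1.55])
rtk_gps = np.array([0.40, 0.90, 1.00, 0.35, 0.60, 1.50])

residuals = derived - rtk_gps
mean_error = float(residuals.mean())            # systematic bias
rmse = float(np.sqrt(np.mean(residuals ** 2)))  # overall accuracy
```

Reporting both the bias and the RMSE separates a constant offset (correctable by calibration) from random error, which matters when ranking survey techniques by accuracy and precision.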

  14. Characterization of Shrubland-Atmosphere Interactions through Use of the Eddy Covariance Method, Distributed Footprint Sampling, and Imagery from Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Anderson, C.; Vivoni, E. R.; Pierini, N.; Robles-Morua, A.; Rango, A.; Laliberte, A.; Saripalli, S.

    2012-12-01

    Ecohydrological dynamics can be evaluated from field observations of land-atmosphere states and fluxes, including water, carbon, and energy exchanges measured through the eddy covariance method. In heterogeneous landscapes, the representativeness of these measurements is not well understood due to the variable nature of the sampling footprint and the mixture of underlying herbaceous, shrub, and soil patches. In this study, we integrate new field techniques to understand how ecosystem surface states are related to turbulent fluxes in two different semiarid shrubland settings in the Jornada (New Mexico) and Santa Rita (Arizona) Experimental Ranges. The two sites are characteristic of Chihuahuan (NM) and Sonoran (AZ) Desert mixed-shrub communities resulting from woody plant encroachment into grassland areas. In each study site, we deployed continuous soil moisture and soil temperature profile observations at twenty sites around an eddy covariance tower after local footprint estimation revealed the optimal sensor network design. We then characterized the tower footprint through terrain and vegetation analyses derived at high resolution (<1 m) from imagery obtained from fixed-wing and rotary-wing Unmanned Aerial Vehicles (UAVs). Our analysis focuses on the summertime land-atmosphere states and fluxes, during which each ecosystem responded differentially to the North American monsoon. We found that vegetation heterogeneity induces spatial differences in soil moisture and temperature that are important to capture when relating these states to the eddy covariance flux measurements. Spatial distributions of surface states at different depths reveal intricate patterns linked to vegetation cover that vary between the two sites. Furthermore, single site measurements at the tower are insufficient to capture the footprint conditions and their influence on turbulent fluxes. We also discuss techniques for aggregating the surface states based upon the vegetation and soil

  15. Carbon Dynamics in Isolated Wetlands of the Northern Everglades Watershed is Revealed using Hydrogeophysical Methods and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    McClellan, M. D.; Job, M. J.; Comas, X.

    2016-12-01

    Peatlands play a critical role in the carbon (C) cycle by sequestering and storing a large fraction of the global soil C pool, and by producing and releasing significant amounts of greenhouse gases (CO2, CH4) into the atmosphere. While most studies exploring these attributes have traditionally focused on boreal and subarctic biomes, wetlands in temperate and tropical climates (such as the Florida Everglades) have been understudied despite accounting for more than 20% of the global peatland C stock. We used a combination of indirect non-invasive geophysical methods (ground penetrating radar, GPR), aerial imagery, and direct measurements (gas traps) to estimate the contribution of subtropical isolated wetlands to the total C pool of the pine flatwoods landscape at the Disney Wilderness Preserve (DWP, Poinciana, FL). Measurements were collected within two types of isolated wetlands at the preserve, emergent and forested. Geophysical surveys were conducted weekly to 1) define total peat thickness (i.e. from the surface to the mineral soil interface) and 2) estimate changes within the internal gas regime. Direct measurements of gas fluxes using gas traps and time-lapse cameras were used to estimate gas emissions (i.e. CH4 and CO2). Aerial photographs were used to estimate surface area for each isolated wetland and develop a relationship between surface area and total wetland C production that is then applied to every isolated wetland in the preserve to estimate the total wetland C contribution. This work seeks to provide evidence that isolated wetlands within the central Florida landscape are key contributors of C to the atmosphere.

  16. Simulation of parafoil reconnaissance imagery

    NASA Astrophysics Data System (ADS)

    Kogler, Kent J.; Sutkus, Linas; Troast, Douglas; Kisatsky, Paul; Charles, Alain M.

    1995-08-01

    Reconnaissance from unmanned platforms is currently of interest to DoD and civil sectors concerned with drug trafficking and illegal immigration. Platforms employed vary from motorized aircraft to tethered balloons. One approach currently under evaluation deploys a TV camera suspended from a parafoil delivered to the area of interest by a cannon launched projectile. Imagery is then transmitted to a remote monitor for processing and interpretation. This paper presents results of imagery obtained from simulated parafoil flights, in which software techniques were developed to introduce simulated image degradation caused by atmospheric obscurants and perturbations in the normal parafoil flight trajectory induced by wind gusts. The approach to capturing continuous motion imagery from captive flight test recordings, the introduction of simulated effects, and the transfer of the processed imagery back to videotape is described.

  17. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
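Elevation change detection by DSM differencing, as assessed above, can be sketched as follows; the DSM values, the induced change, and the 0.40 m detection threshold (chosen relative to the reported ≈ 25 cm accuracy and ≈ 5 cm repeat-flight deviation) are assumptions for illustration:

```python
import numpy as np

# Two hypothetical DSMs (elevations in metres) of the same area from
# repeat UAS flights; all values are illustrative only.
dsm_before = np.array([
    [100.0, 100.1, 100.0],
    [100.2, 100.1, 100.3],
    [100.0, 100.2, 100.1],
])
dsm_after = dsm_before.copy()
dsm_after[1, 1] += 1.5      # an artificially induced, known elevation change

diff = dsm_after - dsm_before

# Flag cells whose change exceeds the method's noise floor; 0.40 m is an
# assumed detection limit, a few multiples of the reported accuracy and
# repeat-flight standard deviation.
threshold = 0.40
change_mask = np.abs(diff) > threshold
n_changed = int(change_mask.sum())
```

With a noise floor this size, the sketch suggests why the study concludes that infrastructure-scale changes (decimetres to metres) are detectable while subtler terrain change is not.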

  18. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impact testing. One video shows a projectile impact on a Kevlar-wrapped aluminum bottle containing gaseous oxygen at 3000 psi. Another video shows animations of a two-stage light gas gun.

  19. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.

  20. Preliminary assessment of aerial photography techniques for canvasback population analysis

    USGS Publications Warehouse

    Munro, R.E.; Trauger, D.L.

    1976-01-01

    Recent intensive research on the canvasback has focused attention on the need for more precise estimates of population parameters. During the 1972-75 period, various types of aerial photographing equipment were evaluated to determine the problems and potentials for employing these techniques in appraisals of canvasback populations. The equipment and procedures available for automated analysis of aerial photographic imagery were also investigated. Serious technical problems remain to be resolved, but some promising results were obtained. Final conclusions about the feasibility of operational implementation await a more rigorous analysis of the data collected.

  1. Flexibility Versus Expertise: A Closer Look at the Employment of United States Air Force Imagery Analysts

    DTIC Science & Technology

    2017-10-01

    significant pressure upon Air Force imagery analysts to exhibit expertise in multiple disciplines including full-motion video, electro-optical still...disciplines varies, but the greatest divergence is between full-motion video and all other forms of still imagery. This paper delves into three...motion video discipline were to be created. The research reveals several positive aspects of this course of action but precautions would be required

  2. Automated 2D shoreline detection from coastal video imagery: an example from the island of Crete

    NASA Astrophysics Data System (ADS)

    Velegrakis, A. F.; Trygonis, V.; Vousdoukas, M. I.; Ghionis, G.; Chatzipavlis, A.; Andreadis, O.; Psarros, F.; Hasiotis, Th.

    2015-06-01

    Beaches are both sensitive and critical coastal system components as they: (i) are vulnerable to coastal erosion (due to e.g. wave regime changes and the short- and long-term sea level rise) and (ii) form valuable ecosystems and economic resources. In order to identify/understand the current and future beach morphodynamics, effective monitoring of the beach spatial characteristics (e.g. the shoreline position) at adequate spatio-temporal resolutions is required. In this contribution we present the results of a new, fully-automated detection method of the (2-D) shoreline positions using high resolution video imaging from a Greek island beach (Ammoudara, Crete). A fully-automated feature detection method was developed/used to monitor the shoreline position in geo-rectified coastal imagery obtained through a video system set to collect 10 min videos every daylight hour with a sampling rate of 5 Hz, from which snapshot, time-averaged (TIMEX) and variance images (SIGMA) were generated. The developed coastal feature detector is based on a very fast algorithm using a localised kernel that progressively grows along the SIGMA or TIMEX digital image, following the maximum backscatter intensity along the feature of interest; the detector results were found to compare very well with those obtained from a semi-automated `manual' shoreline detection procedure. The automated procedure was tested on video imagery obtained from the eastern part of Ammoudara beach in two 5-day periods, a low wave energy period (6-10 April 2014) and a high wave energy period (1 -5 November 2014). 
The results showed that, during the high wave energy event, there were much higher levels of shoreline variance which, however, appeared to be distributed along the shoreline in a similarly uneven pattern to that of the low wave energy event. Shoreline variance `hot spots' were found to be related to the presence/architecture of an offshore submerged shallow beachrock reef, found at a distance of 50-80 m
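The TIMEX and SIGMA products described above are per-pixel temporal statistics over a video burst; a minimal sketch, assuming a small synthetic frame stack in place of the 5 Hz Ammoudara video:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stack of grayscale video frames (time, rows, cols). A real
# 10 min burst at 5 Hz would give 3000 frames; 50 suffice to illustrate.
frames = rng.uniform(0.0, 1.0, size=(50, 8, 8))

# Time-exposure (TIMEX) image: per-pixel mean over the burst. Moving
# foam averages out into a bright band marking the swash zone.
timex = frames.mean(axis=0)

# Variance (SIGMA) image: per-pixel standard deviation; persistent wave
# breaking along the shoreline shows up as bands of high variance.
sigma = frames.std(axis=0)
```

A shoreline detector such as the growing-kernel algorithm described above would then track the ridge of maximum intensity along `sigma` or `timex` rather than operating on individual snapshots.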

  3. Video processing of remote sensor data applied to uranium exploration in Wyoming. [Roll-front U deposits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, R.A.; Marrs, R.W.; Crockell, F.

    1979-06-30

    LANDSAT satellite imagery and aerial photography can be used to map areas of altered sandstone associated with roll-front uranium deposits. Image data must be enhanced so that alteration spectral contrasts can be seen, and video image processing is a fast, low-cost, and efficient tool. For LANDSAT data, the 7/4 ratio produces the best enhancement of altered sandstone. The 6/4 ratio is most effective for color infrared aerial photography. Geochemical and mineralogical associations occur in unaltered, altered, and ore roll-front zones. Samples from Pumpkin Buttes show that iron is the primary coloring agent which makes alteration visually detectable. Eh and pH changes associated with passage of a roll front cause oxidation of magnetite and pyrite to hematite, goethite, and limonite in the host sandstone, thereby producing the alteration. Statistical analyses show that the detectability of geochemical and color zonation in host sands is weakened by soil-forming processes. Alteration can only be mapped in areas of thin soil cover and moderate to sparse vegetative cover.
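A band-ratio enhancement like the 7/4 ratio described above can be sketched as follows; the digital numbers are invented and the display stretch is an assumed convention, not the authors' processing chain:

```python
import numpy as np

# Hypothetical MSS band 7 (near-infrared) and band 4 (green) digital
# numbers for a 3x3 scene; the values are invented for illustration.
band7 = np.array([[40., 80., 60.],
                  [90., 50., 70.],
                  [30., 60., 85.]])
band4 = np.array([[20., 20., 30.],
                  [30., 25., 35.],
                  [30., 20., 17.]])

# 7/4 ratio image: ratioing suppresses overall illumination differences
# and enhances the spectral contrast of the altered sandstone.
eps = 1e-6                           # guard against division by zero
ratio_74 = band7 / (band4 + eps)

# Linear stretch to 0-255 for quick-look display.
lo, hi = ratio_74.min(), ratio_74.max()
stretched = ((ratio_74 - lo) / (hi - lo) * 255).astype(np.uint8)
```

Because a ratio divides out multiplicative effects such as topographic shading, pixels with the same spectral shape map to the same ratio value regardless of brightness, which is what makes the alteration contrast visible.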

  4. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class, and a map classifying the terrain imaged rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, and temporal and spatial arrangement.
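The k-means clustering of per-image features mentioned above can be sketched with a minimal NumPy implementation; the three-component color features, k = 2, and the representative-selection rule are illustrative assumptions, not the mission's actual configuration:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every feature vector to every centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical per-image feature vectors (e.g. mean R, G, B) for six
# downlink candidates; two obvious groups, purely for illustration.
features = np.array([
    [0.90, 0.10, 0.10], [0.80, 0.20, 0.10], [0.90, 0.15, 0.05],  # reddish
    [0.10, 0.10, 0.90], [0.20, 0.10, 0.80], [0.10, 0.20, 0.85],  # bluish
])
labels, centroids = kmeans(features, k=2)

# Downlink one representative per class: the image nearest its centroid.
representatives = [
    int(np.where(labels == j)[0][
        np.linalg.norm(features[labels == j] - centroids[j], axis=1).argmin()])
    for j in range(2)
]
```

Downlinking only `representatives` plus the `labels` map is the bandwidth saving: one image per class stands in for the whole cluster.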

  5. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al. [1]. We present the draft of a process chain for an image based change detection which is designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off the shelf (COTS) system, which comprises a differential GPS and autopilot system, to estimate the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not yet been estimated. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data.
For the
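When the perspective change between image sets is small, as the mission planning above aims to guarantee, change detection reduces to thresholded differencing of co-registered frames; a minimal sketch under that assumption (pixel values and threshold invented):

```python
import numpy as np

# Hypothetical co-registered "before"/"after" grayscale frames (0-1).
# A small perspective change is assumed to have been removed already by
# the repeat-trajectory mission planning described above.
before = np.full((6, 6), 0.5)
after = before.copy()
after[2:4, 2:4] = 0.9          # e.g. a fresh excavation or skid mark

# Thresholded differencing; the 0.2 noise threshold is an assumption.
diff = np.abs(after - before)
change_mask = diff > 0.2
n_changed = int(change_mask.sum())
```

Real pipelines would additionally compensate residual misregistration and illumination differences before differencing, which is exactly why the repetition accuracy of the flight trajectory matters.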

  6. Multisensor monitoring of deforestation in the Guinea Highlands of West Africa

    NASA Technical Reports Server (NTRS)

    Gilruth, Peter T.; Hutchinson, Charles F.

    1990-01-01

    Multiple remote sensing systems were used to assess deforestation in the Guinea Highlands (Fouta Djallon) of West Africa. Sensor systems included: (1) historical (1953) and current (1989) aerial mapping photography; (2) current large-scale, small format (35mm) aerial photography; (3) current aerial video imagery; and (4) historical (1973) and recent (1985) LANDSAT MSS. Photographic and video data were manually interpreted and incorporated in a vector-based geographic information system (GIS). LANDSAT data were digitally classified. General results showed an increase in permanent and shifting agriculture over the past 35 years. This finding is consistent with hypothesized strategies to increase agricultural production through a shortening of the fallow period in areas of shifting cultivation. However, results also show that the total area of both permanent and shifting agriculture had expanded at the expense of natural vegetation, accompanied by an increase in erosion. Although sequential LANDSAT MSS cannot be used in this region to accurately map land cover, the location, direction and magnitude of changes can be detected in relative terms. Historical and current aerial photography can be used to map agricultural land use changes with some accuracy. Video imagery is useful as ancillary data for mapping vegetation. The most prudent approach to mapping deforestation would incorporate a multistage approach based on these sensors.

  7. An assessment of remote sensor imagery in the determination of housing quality data

    NASA Technical Reports Server (NTRS)

    Mallon, H. J.; Howard, J. Y.

    1971-01-01

    Selected census tracts in the metropolitan Washington area were examined using varying scales of aerial photography. Observable characteristics of housing and neighborhoods were assessed to determine feasibility of providing data on housing stock and quality and neighborhood condition from the imagery. Small scale imagery is shown to be of relatively marginal value in providing much of the data in the detail required, but can be useful for general survey purposes.

  8. Spatial Scale Gap Filling Using an Unmanned Aerial System: A Statistical Downscaling Method for Applications in Precision Agriculture.

    PubMed

    Hassan-Esfahani, Leila; Ebtehaj, Ardeshir M; Torres-Rua, Alfonso; McKee, Mac

    2017-09-14

    Applications of satellite-borne observations in precision agriculture (PA) are often limited due to the coarse spatial resolution of satellite imagery. This paper uses high-resolution airborne observations to increase the spatial resolution of satellite data for related applications in PA. A new variational downscaling scheme is presented that uses coincident aerial imagery products from "AggieAir", an unmanned aerial system, to increase the spatial resolution of Landsat satellite data. This approach is primarily tested for downscaling individual band Landsat images that can be used to derive normalized difference vegetation index (NDVI) and surface soil moisture (SSM). Quantitative and qualitative results demonstrate promising capabilities of the downscaling approach, enabling an effective increase of the spatial resolution of Landsat imagery by factors of 2 to 4. Specifically, the downscaling scheme recovered the missing high-resolution features of the imagery and reduced the root mean squared error by 15, 11, and 10 percent in the visual, near infrared, and thermal infrared bands, respectively. This metric is reduced by 9% in the derived NDVI and remains negligible for the soil moisture products.
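The NDVI product mentioned above is a simple per-pixel band combination; a minimal sketch with invented reflectance values:

```python
import numpy as np

# Hypothetical red and near-infrared reflectance rasters (0-1 scale).
red = np.array([[0.10, 0.30],
                [0.05, 0.40]])
nir = np.array([[0.50, 0.35],
                [0.60, 0.42]])

# NDVI = (NIR - Red) / (NIR + Red); dense, healthy vegetation pushes
# the value toward +1, bare soil toward 0.
ndvi = (nir - red) / (nir + red)
```

Because NDVI is computed per pixel, downscaling the individual red and NIR bands first (as in the scheme above) directly yields a higher-resolution NDVI map.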

  9. Spatial Scale Gap Filling Using an Unmanned Aerial System: A Statistical Downscaling Method for Applications in Precision Agriculture

    PubMed Central

    Hassan-Esfahani, Leila; Ebtehaj, Ardeshir M.; McKee, Mac

    2017-01-01

    Applications of satellite-borne observations in precision agriculture (PA) are often limited due to the coarse spatial resolution of satellite imagery. This paper uses high-resolution airborne observations to increase the spatial resolution of satellite data for related applications in PA. A new variational downscaling scheme is presented that uses coincident aerial imagery products from “AggieAir”, an unmanned aerial system, to increase the spatial resolution of Landsat satellite data. This approach is primarily tested for downscaling individual band Landsat images that can be used to derive normalized difference vegetation index (NDVI) and surface soil moisture (SSM). Quantitative and qualitative results demonstrate promising capabilities of the downscaling approach, enabling an effective increase of the spatial resolution of Landsat imagery by factors of 2 to 4. Specifically, the downscaling scheme recovered the missing high-resolution features of the imagery and reduced the root mean squared error by 15, 11, and 10 percent in the visual, near infrared, and thermal infrared bands, respectively. This metric is reduced by 9% in the derived NDVI and remains negligible for the soil moisture products. PMID:28906428

  10. Evaluation of ikonos satellite imagery for detecting ice storm damage to oak forests in Eastern Kentucky

    Treesearch

    W. Henry McNab; Tracy Roof

    2006-01-01

    Ice storms are a recurring landscape-scale disturbance in the eastern U.S. where they may cause varying levels of damage to upland hardwood forests. High-resolution Ikonos imagery and semiautomated detection of ice storm damage may be an alternative to manually interpreted aerial photography. We evaluated Ikonos multispectral, winter and summer imagery as a tool for...

  11. BOREAS Level-0 C-130 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large format (9 x 9 inch) color transparencies, with imagery from multiple missions (hundreds of frames) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.

  12. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is undermined by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, making it difficult to accurately estimate the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and research projects. However, consistent compliance with sub-decimetre accuracy as well as a correction of errors in height remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the imprecise exterior orientation parameters of the MM platform will be utilised, as they enable the application of accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to more closely resemble aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. Because the images originate from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry, and perspective. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario.
This method leads to a significant reduction of outliers due to the limited availability
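    The paper's exact matching pipeline is not reproduced in the record; as an illustrative sketch only (not the authors' implementation), window-constrained template matching of the kind described can be expressed as a normalized cross-correlation search:

```python
import numpy as np

def match_template_ncc(search, template):
    """Slide `template` over a `search` window and return the (row, col)
    of the best normalized cross-correlation match plus its score."""
    sh, sw = search.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_rc = -np.inf, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            patch = search[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```

    Restricting `search` to a window predicted from the platform's exterior orientation, as the abstract describes, keeps this brute-force scan cheap.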

  13. Prediction of senescent rangeland canopy structural attributes with airborne hyperspectral imagery

    USDA-ARS?s Scientific Manuscript database

    Canopy structural and chemical data are needed for senescent, mixed-grass prairie landscapes in autumn, yet models driven by image data are lacking for rangelands dominated by non-photosynthetically active vegetation (NPV). Here, we report how aerial hyperspectral imagery might be modeled to predic...

  14. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  15. Digital reproduction of historical aerial photographic prints for preserving a deteriorating archive

    USGS Publications Warehouse

    Luman, D.E.; Stohr, C.; Hunt, L.

    1997-01-01

    Aerial photography from the 1920s and 1930s is a unique record of historical information used by government agencies, surveyors, consulting scientists and engineers, lawyers, and individuals for diverse purposes. Unfortunately, the use of the historical aerial photographic prints has resulted in their becoming worn, lost, and faded. Few negatives exist for the earliest photography. A pilot project demonstrated that high-quality, precision scanning of historical aerial photography is an appealing alternative to traditional methods for reproduction. Optimum sampling rate varies from photograph to photograph, ranging between 31 and 42 µm/pixel for the USDA photographs tested. Inclusion of an index, such as a photomosaic or gazetteer, and ability to view the imagery promptly upon request are highly desirable.

  16. Investigation of methods and approaches for collecting and recording highway inventory data.

    DOT National Transportation Integrated Search

    2013-06-01

    Many techniques for collecting highway inventory data have been used by state and local agencies in the U.S. These : techniques include field inventory, photo/video log, integrated GPS/GIS mapping systems, aerial photography, satellite : imagery, vir...

  17. Monitoring black-tailed prairie dog colonies with high-resolution satellite imagery

    USGS Publications Warehouse

    Sidle, John G.; Johnson, D.H.; Euliss, B.R.; Tooze, M.

    2002-01-01

    The United States Fish and Wildlife Service has determined that the black-tailed prairie dog (Cynomys ludovicianus) warrants listing as a threatened species under the Endangered Species Act. Central to any conservation planning for the black-tailed prairie dog is an appropriate detection and monitoring technique. Because coarse-resolution satellite imagery is not adequate to detect black-tailed prairie dog colonies, we examined the usefulness of recently available high-resolution (1-m) satellite imagery. In 6 purchased scenes of national grasslands, we were easily able to visually detect small and large colonies without using image-processing algorithms. The Ikonos (Space Imaging™) satellite imagery was as adequate as large-scale aerial photography to delineate colonies. Based on the high quality of imagery, we discuss a possible monitoring program for black-tailed prairie dog colonies throughout the Great Plains, using the species' distribution in North Dakota as an example. Monitoring plots could be established and imagery acquired periodically to track the expansion and contraction of colonies.

  18. Forestry, geology and hydrological investigations from ERTS-1 imagery in two areas of Ecuador, South America

    NASA Technical Reports Server (NTRS)

    Moreno, N. V. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. In the Oriente area, well-drained forests containing commercially valuable hardwoods can be recognized confidently and delineated quickly on the ERTS imagery. In the tropical rainforest, ERTS can provide an abundance of inferential information about large scale geologic structures. ERTS imagery is better than normal aerial photography for recognizing linears. The imagery is particularly useful for updating maps of the distributary system of the Guagas River Basin and of any other river with a similarly rapidly changing channel pattern.

  19. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    NASA Astrophysics Data System (ADS)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

    Stereoscopic 3D (S3D) content in games, film and other audio-visual media has been steadily increasing over the past number of years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.

  20. Integrating multisource imagery and GIS analysis for mapping Bermuda's benthic habitats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vierros, M.K.

    1997-06-01

    Bermuda is a group of isolated oceanic islands situated in the northwest Atlantic Ocean and surrounded by the Sargasso Sea. Bermuda possesses the northernmost coral reefs and mangroves in the Atlantic Ocean, and because of its high population density, both the terrestrial and marine environments are under intense human pressure. Although a long record of scientific research exists, this study is the first attempt to comprehensively map the area's benthic habitats, despite the need for such a map for resource assessment and management purposes. Multi-source and multi-date imagery were used for producing the habitat map due to the lack of a complete, up-to-date image. Classifications were performed with SPOT data, and the results verified from recent aerial photography and current aerial video, along with extensive ground truthing. Stratification of the image into regions prior to classification reduced the confusing effects of varying water depth. Classification accuracy in shallow areas was increased by derivation of a texture pseudo-channel, while bathymetry was used as a classification tool in deeper areas, where local patterns of zonation were well known. Because of seasonal variation in the extent of seagrasses, a classification scheme based on density could not be used. Instead, a set of classes based on the seagrass area's exposure to the open ocean was developed. The resulting habitat map is currently being assessed for accuracy with promising preliminary results, indicating its usefulness as a basis for future resource assessment studies.

  1. The availability of local aerial photography in southern California [for solution of urban planning problems]

    NASA Technical Reports Server (NTRS)

    Allen, W., III; Sledge, B.; Paul, C. K.; Landini, A. J.

    1974-01-01

    Some of the major photography and photogrammetric suppliers and users located in Southern California are listed. Recent trends in aerial photographic coverage of the Los Angeles basin area are also noted, as well as the uses of that imagery.

  2. Comparison of aerial imagery from manned and unmanned aircraft platforms for monitoring cotton growth

    USDA-ARS?s Scientific Manuscript database

    Unmanned aircraft systems (UAS) have emerged as a low-cost and versatile remote sensing platform in recent years, but little work has been done on comparing imagery from manned and unmanned platforms for crop assessment. The objective of this study was to compare imagery taken from multiple cameras ...

  3. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    PubMed Central

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-01-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
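    The discrimination by temperature, size, and shape described above can be approximated with a simple threshold-and-label pipeline. The sketch below is illustrative only; the threshold and area bounds are assumptions, not the values used in the study:

```python
import numpy as np
from scipy import ndimage

def detect_warm_targets(thermal, temp_thresh, min_area, max_area):
    """Threshold a thermal image, label connected warm blobs, and keep
    those whose pixel area falls in a plausible animal-size range.
    Returns a list of (centroid_row, centroid_col, area) tuples."""
    mask = thermal > temp_thresh                 # warm-body threshold
    labels, n = ndimage.label(mask)              # connected components
    detections = []
    for i in range(1, n + 1):
        blob = labels == i
        area = int(blob.sum())
        if min_area <= area <= max_area:         # reject noise / clutter
            r, c = ndimage.center_of_mass(blob)
            detections.append((r, c, area))
    return detections
```

    A shape test (e.g. blob eccentricity) would be layered on top in the same loop to separate touching animals in aggregations.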

  4. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    NASA Astrophysics Data System (ADS)

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-03-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95-98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.

  5. Processing of SeaMARC swath sonar imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratson, L.; Malinverno, A.; Edwards, M.

    1990-05-01

    Side-scan swath sonar systems have become an increasingly important means of mapping the sea floor. Two such systems are the deep-towed, high-resolution SeaMARC I sonar, which has a variable swath width of up to 5 km, and the shallow-towed, lower-resolution SeaMARC II sonar, which has a swath width of 10 km. The sea-floor imagery of acoustic backscatter output by the SeaMARC sonars is analogous to aerial photographs and airborne side-looking radar images of continental topography. Geologic interpretation of the sea-floor imagery is greatly facilitated by image processing. Image processing of the digital backscatter data involves removal of noise by median filtering, spatial filtering to remove sonar scans of anomalous intensity, across-track corrections to remove beam patterns caused by nonuniform response of the sonar transducers to changes in incident angle, and contrast enhancement by histogram equalization to maximize the available dynamic range. Correct geologic interpretation requires submarine structural fabrics to be displayed in their proper locations and orientations. Geographic projection of sea-floor imagery is achieved by merging the enhanced imagery with the sonar vehicle navigation and correcting for vehicle attitude. Co-registration of bathymetry with sonar imagery introduces sea-floor relief and permits the imagery to be displayed in three-dimensional perspectives, furthering the ability of the marine geologist to infer the processes shaping formerly hidden subsea terrains.
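    Two of the processing steps named above, median despeckling and histogram equalization, can be sketched generically; the kernel size and 8-bit dynamic range below are illustrative assumptions, not SeaMARC parameters:

```python
import numpy as np
from scipy import ndimage

def enhance_sonar(img, kernel=3):
    """Despeckle an 8-bit backscatter image with a median filter, then
    stretch contrast with histogram equalization over 256 grey levels."""
    filt = ndimage.median_filter(img, size=kernel)
    hist, _ = np.histogram(filt.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    lut = 255 * cdf / cdf[-1]                    # normalised CDF as a LUT
    return lut[filt.astype(np.uint8)].astype(np.uint8)
```

    The equalization maps each grey level through the cumulative histogram, so the brightest occupied level is pushed to 255 and the full dynamic range is used.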

  6. A Vegetation Analysis on Horn Island Mississippi, ca. 1940 using Habitat Characteristic Dimensions Derived from Historical Aerial Photography

    NASA Astrophysics Data System (ADS)

    Jeter, G. W.; Carter, G. A.

    2013-12-01

    Guy (Will) Wilburn Jeter Jr. and Gregory A. Carter, University of Southern Mississippi, Geography and Geology, Gulf Coast Geospatial Center. The over-arching goal of this research is to assess habitat change over a seventy-year period to better understand the combined effects of global sea level rise and storm impacts on the stability of Horn Island, MS habitats. Historical aerial photography is often overlooked as a resource for determining habitat change. However, the spatial information provided even by black and white imagery can give insight into past habitat composition via textural analysis. This research evaluates characteristic dimensions, most notably the patch size of habitat types, using simple geo-statistics and textures of brightness values of historical aerial imagery. It is assumed that each cover type has an identifiable patch size that can be used as a unique classifier of each habitat type. Analytical methods applied to the 1940 imagery were developed using 2010 field data and USDA aerial imagery. Textural moving window methods and basic geo-statistics were used to estimate characteristic dimensions of each cover type in 1940 aerial photography. The moving window texture analysis was configured with multiple window sizes to capture the characteristic dimensions of six habitat types: water, bare sand, dune herb land, estuarine shrub land, marsh land and slash pine woodland. Coefficient of variation (CV), contrast, and entropy texture filters were used to analyze the spatial variability of the 1940 and 2010 imagery. CV was used to depict the horizontal variability of each habitat characteristic dimension. Contrast was used to represent the variability of bright versus dark pixel values; entropy was used to show the variation in the slash pine woodland habitat type. Results indicate a substantial increase in marshland habitat relative to other habitat types since 1940. Results also reveal each habitat-type, such as dune herb-land, marsh
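    The moving-window coefficient-of-variation texture described above can be sketched as follows; the window size is an illustrative assumption, and the study actually swept multiple window sizes:

```python
import numpy as np
from scipy import ndimage

def local_cv(img, window=5):
    """Coefficient of variation (std/mean) in a sliding window, a simple
    texture measure for separating smooth from patchy cover types."""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, size=window)
    mean_sq = ndimage.uniform_filter(img ** 2, size=window)
    var = np.clip(mean_sq - mean ** 2, 0, None)   # guard tiny negatives
    return np.sqrt(var) / np.where(mean == 0, 1, mean)
```

    Smooth cover such as open water yields CV near zero, while patchy vegetation yields high CV at window sizes near its characteristic patch dimension.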

  7. Search and Pursuit with Unmanned Aerial Vehicles in Road Networks

    DTIC Science & Technology

    2013-11-01

    production volume in each area for use in consumer electronics. Simultaneously, a shift in defense strategy towards unmanned vehicles, particularly...Vöcking. Randomized pursuit-evasion in graphs. Combinatorics, Probability and Computing, 12:225–244, May 2003. [3] AeroVironment Inc. Raven Product Data...Ali and Mubarak Shah. COCOA - tracking in aerial imagery. In SPIE Airborne Intelligence, Surveillance, Reconnaissance Systems and Applications, 2006

  8. Ortho-Rectification of Narrow Band Multi-Spectral Imagery Assisted by Dslr RGB Imagery Acquired by a Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.

    2015-08-01

    The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow band filters. Due to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire imagery of high spectral, spatial and temporal resolution for various remote sensing applications. However, because each band's wavelength range is only 10 nm, the images have low resolution and signal-to-noise ratio, making them unsuitable for image matching and digital surface model (DSM) generation. Moreover, since the spectral correlation among the 12 bands of MiniMCA images is low, it is difficult to perform tie-point matching and aerial triangulation at the same time. In this study, we thus propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher spatial resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS, data from these two sensors can be collected at the same time or individually. In this study, we adopt a fixed-wing UAS to carry a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multi-spectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, since all lenses of the MiniMCA-12 have different perspective centers and viewing angles, the original 12 channels have a significant band misregistration effect. Thus, the first issue encountered is to reduce this band misregistration. Because all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we thus propose a modified projective transformation (MPT) method together with two systematic error correction procedures to register all 12 bands of imagery in the same image space.
It means that those 12 bands of images acquired at
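    The modified projective transformation itself is not given in the record. As a generic stand-in, a standard projective transform (homography) between a band and the master band can be estimated from tie points with the direct linear transform (DLT); this sketch is an assumption about the general technique, not the authors' MPT:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform mapping src -> dst points
    (each Nx2, N >= 4) with the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # Null vector of the design matrix = flattened homography.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map Nx2 points through H using homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

    Resampling each band through its fitted transform places all 12 channels in the master band's image space.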

  9. Real-time people and vehicle detection from UAV imagery

    NASA Astrophysics Data System (ADS)

    Gaszczak, Anna; Breckon, Toby P.; Han, Jiwan

    2011-01-01

    A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally, we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.
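    The cascaded Haar classifiers referenced above are built from simple rectangle-contrast features evaluated in constant time on an integral image. A minimal sketch of such a weak feature (not the authors' trained cascade) is:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that
    ii[r, c] is the sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle with top-left (r, c)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle (left-vs-right) Haar-like feature: the dark/light
    contrast a cascade stage thresholds as a weak classifier."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

    A cascade chains many such thresholded features, rejecting most windows early so that full evaluation cost is paid only on promising regions.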

  10. Multistage, Multiband and sequential imagery to identify and quantify non-forest vegetation resources

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1971-01-01

    Analysis and recognition processing of multispectral scanner imagery for plant community classification and interpretations of various film-filter-scale aerial photographs are reported. Data analyses and manuscript preparation of research on microdensitometry for plant community and component identification and remote estimates of biomass are included.

  11. Dreams and Mediation in Music Video.

    ERIC Educational Resources Information Center

    Burns, Gary

    The most extensive use of dream imagery in popular culture occurs in the visual arts, and in the past five years it has become evident that music video (a semi-narrative hybrid of film and television) is the most dreamlike media product of all. The rampant depiction and implication of dreams and media fantasies in music video are often strongly…

  12. Runway Detection From Map, Video and Aircraft Navigational Data

    DTIC Science & Technology

    2016-03-01

    RUNWAY DETECTION FROM MAP, VIDEO AND AIRCRAFT NAVIGATIONAL DATA, by Jose R. Espinosa Gloria, March 2016. Thesis Advisor: Roberto Cristi; Co-Advisor: Oleg... Mexican Navy, unmanned aerial vehicles (UAV) have been equipped with daylight and infrared cameras. Processing the video information obtained from these

  13. Applicability of ERTS-1 imagery to the study of suspended sediment and aquatic fronts

    NASA Technical Reports Server (NTRS)

    Klemas, V.; Srna, R.; Treasure, W.; Otley, M.

    1973-01-01

    Imagery from three successful ERTS-1 passes over the Delaware Bay and Atlantic Coastal Region have been evaluated to determine visibility of aquatic features. Data gathered from ground truth teams before and during the overflights, in conjunction with aerial photographs taken at various altitudes, were used to interpret the imagery. The overpasses took place on August 16, October 10, 1972, and January 26, 1973, with cloud cover ranging from about zero to twenty percent. (I.D. Nos. 1024-15073, 1079-15133, and 1187-15140). Visual inspection, density slicing and multispectral analysis of the imagery revealed strong suspended sediment patterns and several distinct types of aquatic interfaces or frontal systems.

  14. Overall evaluation of LANDSAT (ERTS) follow on imagery for cartographic application

    NASA Technical Reports Server (NTRS)

    Colvocoresses, A. P. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. LANDSAT imagery can be operationally applied to the revision of nautical charts. The imagery depicts shallow seas in a form that permits accurate planimetric image mapping of features to 20 meters of depth where the conditions of water clarity and bottom reflection are suitable. LANDSAT data also provide an excellent simulation of the earth's surface for such applications as aeronautical charting and radar image correlation in aircraft and aircraft simulators. Radiometric enhancement, particularly edge enhancement, a technique only marginally successful with aerial photographs, has proved to be of high value when applied to LANDSAT data.
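    Edge enhancement of the kind described can be illustrated with simple Laplacian-subtraction sharpening; the 4-neighbour kernel and unit weight below are assumptions for illustration, not the LANDSAT processing actually used:

```python
import numpy as np

def edge_enhance(img, weight=1.0):
    """Sharpen edges by subtracting a weighted 4-neighbour Laplacian
    from an 8-bit-range band (unsharp-style enhancement)."""
    f = img.astype(float)
    pad = np.pad(f, 1, mode="edge")              # replicate borders
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
           pad[1:-1, :-2] + pad[1:-1, 2:] - 4.0 * f)
    return np.clip(f - weight * lap, 0, 255)
```

    Flat regions pass through unchanged, while intensity steps are overshot on both sides, which is what makes linears and coastlines stand out.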

  15. Cultural Artifact Detection in Long Wave Infrared Imagery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dylan Zachary; Craven, Julia M.; Ramon, Eric

    2017-01-01

    Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts, and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.

  16. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module strives for high precision digital surface models (DSMs) which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions in order to minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high quality ortho product based on UltraCam imagery. UltraMap V3 is the first fully integrated and interactive solution for making the best use of UltraCam images in order to deliver DSM and ortho imagery.

  17. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, as in content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
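    The entropy and mutual information metrics named above can be computed directly from image histograms; the bin count and 8-bit grey range below are illustrative assumptions:

```python
import numpy as np

def entropy(img, bins=32):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=32):
    """MI(a, b) = H(a) + H(b) - H(a, b): how much knowing one image
    reduces uncertainty about the other, a common fusion-quality cue."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pj = joint / joint.sum()
    pj = pj[pj > 0]
    h_joint = float(-(pj * np.log2(pj)).sum())
    return entropy(a, bins) + entropy(b, bins) - h_joint
```

    A fused image that preserves information from a source image scores high MI against it; MI of an image with itself equals its own entropy, and MI against an unrelated constant image is zero.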

  18. Digital Video Over Space Systems and Networks

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2010-01-01

    This slide presentation reviews the use of digital video with space systems and networks. The earliest use of video was film, which precluded live viewing; this gave way to live television from space, and in turn to digital video transmitted using internet protocol. This has provided many improvements along with new challenges, some of which are reviewed. Digital video transmitted over space systems can provide incredible imagery; however, the process must be viewed as an entire system, rather than piecemeal.

  19. Emergency Response Imagery Related to Hurricanes Harvey, Irma, and Maria

    NASA Astrophysics Data System (ADS)

    Worthem, A. V.; Madore, B.; Imahori, G.; Woolard, J.; Sellars, J.; Halbach, A.; Helmricks, D.; Quarrick, J.

    2017-12-01

    NOAA's National Geodetic Survey (NGS) and Remote Sensing Division acquired and rapidly disseminated emergency response imagery related to the three recent hurricanes Harvey, Irma, and Maria. Aerial imagery was collected using a Trimble Digital Sensor System, a high-resolution digital camera, by means of NOAA's King Air 350ER and DeHavilland Twin Otter (DHC-6) aircraft. The emergency response images are used to assess the before and after effects of the hurricanes' damage. The imagery aids emergency responders, such as FEMA, the Coast Guard, and other state and local governments, in developing recovery strategies and efforts by prioritizing areas most affected and distributing appropriate resources. Collected imagery is also used to provide damage assessment for use in long-term recovery and rebuilding efforts. Additionally, the imagery allows evacuated persons to see images of their homes and neighborhoods remotely. Each of the individual images is processed through ortho-rectification and merged into a uniform mosaic image. These remotely sensed datasets are publicly available, and often used by web-based map servers as well as federal, state, and local government agencies. This poster will show the imagery collected for these three hurricanes and the processes involved in getting data quickly into the hands of those that need it most.

  20. Development and Demonstration of an Aerial Imagery Assessment Method to Monitor Changes in Restored Stream Condition

    NASA Astrophysics Data System (ADS)

    Fong, L. S.; Ambrose, R. F.

    2017-12-01

    Remote sensing is an excellent way to assess the changing condition of streams and wetlands. Several studies have measured large-scale changes in riparian condition indicators, but few have remotely applied multi-metric assessments on a finer scale to measure changes, such as those caused by restoration, in the condition of small riparian areas. We developed an aerial imagery assessment method (AIAM) that combines landscape, hydrology, and vegetation observations into one index describing overall ecological condition of non-confined streams. Verification of AIAM demonstrated that sites in good condition (as assessed on-site by the California Rapid Assessment Method) received high AIAM scores. (AIAM was not verified with poor condition sites.) Spearman rank correlation tests comparing AIAM and the field-based California Rapid Assessment Method (CRAM) results revealed that some components of the two methods were highly correlated. The application of AIAM is illustrated with time-series restoration trajectories of three southern California stream restoration projects aged 15 to 21 years. The trajectories indicate that the projects improved in condition in years following their restoration, with vegetation showing the most dynamic change over time. AIAM restoration trajectories also overlapped to different degrees with CRAM chronosequence restoration performance curves that demonstrate the hypothetical development of high-performing projects. AIAM has high potential as a remote ecological assessment method and effective tool to determine restoration trajectories. Ultimately, this tool could be used to further improve stream and wetland restoration management.

  1. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. Copyright © 2014 Elsevier Ltd. All rights reserved.
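The DSM accuracy check described above reduces to a root mean square error computation between DSM-derived and GPS-measured heights. A minimal sketch, with invented height values rather than the study's data:

```python
import numpy as np

# Hypothetical check points: terrace heights read from a UAV-derived DSM
# versus the same points measured by field GPS (metres). Illustrative only.
dsm_height = np.array([2.1, 1.8, 3.3, 2.6])
gps_height = np.array([2.4, 1.5, 3.0, 2.9])

rmse = np.sqrt(np.mean((dsm_height - gps_height) ** 2))
print(rmse)  # about 0.3 m, under the 0.5 m figure reported above
```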

  2. The use of historical imagery in the remediation of an urban hazardous waste site

    USGS Publications Warehouse

    Slonecker, E.T.

    2011-01-01

    The information derived from the interpretation of historical aerial photographs is perhaps the most basic multitemporal application of remote-sensing data. Aerial photographs dating back to the early 20th century can be extremely valuable sources of historical landscape activity. In this application, imagery from 1918 to 1927 provided a wealth of information about chemical weapons testing, storage, handling, and disposal of these hazardous materials. When analyzed by a trained photo-analyst, the 1918 aerial photographs resulted in 42 features of potential interest. When compared with current remedial activities and known areas of contamination, 33 of 42 or 78.5% of the features were spatially correlated with areas of known contamination or other remedial hazardous waste cleanup activity.

  3. Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods

    NASA Astrophysics Data System (ADS)

    Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.

    2008-12-01

    Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, amid other ecological services. It is also the center of ongoing debate over USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs. An understanding of data-set relationships among old and new methods is also needed. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3,000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to, or greater than, those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
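The data-set comparisons above boil down to correlating per-plot cover estimates between methods. A sketch with invented bare-ground percentages (the study's r values came from its own plot data):

```python
import numpy as np

# Invented per-plot bare-ground percentages for one ground-based and one
# aerial image-based method; illustrative only, not the study's measurements.
ground = np.array([12.0, 35.5, 8.2, 47.1, 22.3, 30.8])
aerial = np.array([14.1, 33.0, 10.5, 44.8, 25.0, 28.9])

r = np.corrcoef(ground, aerial)[0, 1]  # correlation between the two methods
print(round(r, 3))
```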

  4. Surf zone characterization from Unmanned Aerial Vehicle imagery

    NASA Astrophysics Data System (ADS)

    Holman, Rob A.; Holland, K. Todd; Lalejini, Dave M.; Spansel, Steven D.

    2011-11-01

    We investigate the issues and methods for estimating nearshore bathymetry based on wave celerity measurements obtained using time series imagery from small unmanned aircraft systems (SUAS). In contrast to time series imagery from fixed cameras or from larger aircraft, SUAS data are usually short, gappy in time, and unsteady in aim in high frequency ways that are not reflected by the filtered navigation metadata. These issues were first investigated using fixed camera proxy data that have been intentionally degraded to mimic these problems. It has been found that records as short as 50 s or less can yield good bathymetry results. Gaps in records associated with inadvertent look-away during unsteady flight would normally prevent use of the required standard Fast Fourier Transform methods. However, we found that a full Fourier Transform could be implemented on the remaining valid record segments and was effective if at least 50% of total record length remained intact. Errors in image geo-navigation were stabilized based on fixed ground fiducials within a required land portion of the image. The elements of a future method that could remove this requirement were then outlined. Two test SUAS data runs were analyzed and compared to survey ground truth data. A 54-s data run at Eglin Air Force Base on the Gulf of Mexico yielded a good bathymetry product that compared well with survey data (standard deviation of 0.51 m in depths ranging from 0 to 4 m). A shorter (30.5 s) record from Silver Strand Beach (near Coronado) on the US west coast provided a good approximation of the surveyed bathymetry but was excessively deep offshore and had larger errors (1.19 m for true depths ranging from 0 to 6 m), consistent with the short record length. Seventy-three percent of the bathymetry estimates lay within 1 m of the truth for most of the nearshore.
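The gap-tolerant spectral analysis described above can be sketched as a least-squares fit of sinusoids using only the valid samples of a record. The frequency, wave amplitude, and gap below are synthetic, not the Eglin or Silver Strand data:

```python
import numpy as np

def gappy_fourier_coefficient(t, x, valid, f):
    """Least-squares fit of a*cos(2*pi*f*t) + b*sin(2*pi*f*t) on valid samples only."""
    t, x = t[valid], x[valid]
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    return complex(a, -b)  # complex amplitude, FFT-like sign convention

# Synthetic 50 s record, 2 Hz sampling, a 0.1 Hz wave, and a look-away gap
t = np.arange(0, 50, 0.5)
x = 1.5 * np.cos(2 * np.pi * 0.1 * t + 0.3)
valid = (t < 20) | (t > 32)  # simulate a gap in the middle of the record
c = gappy_fourier_coefficient(t, x, valid, 0.1)
print(abs(c))  # ~1.5, the true wave amplitude, recovered despite the gap
```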

  5. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.
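Controlled degradation of the kind the study applies can be as simple as injecting Gaussian noise at a chosen signal-to-noise ratio. The function and parameters below are an illustrative assumption, not the authors' protocol:

```python
import numpy as np

def degrade(img, snr_db, rng):
    """Add zero-mean Gaussian noise so the result has the requested SNR (dB)."""
    signal_power = np.mean(img ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

rng = np.random.default_rng(42)
img = np.ones((8, 8)) * 0.5        # flat synthetic patch
noisy = degrade(img, 20.0, rng)    # 20 dB SNR degradation
print(noisy.shape)
```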

  6. A qualitative evaluation of Landsat imagery of Australian rangelands

    USGS Publications Warehouse

    Graetz, R.D.; Carneggie, David M.; Hacker, R.; Lendon, C.; Wilcox, D.G.

    1976-01-01

    Multidate, multispectral ERTS-1 imagery of three different rangeland areas within Australia was evaluated for its usefulness in preparing inventories of rangeland types, assessing on a broad scale range condition within these rangeland types, and assessing the response of rangelands to rainfall events over large areas. For the three divergent rangeland test areas, centered on Broken Hill, Alice Springs and Kalgoorlie, detailed interpretation of the imagery only partially satisfied the information requirements set. It was most useful in the Broken Hill area where fenceline contrasts in range condition were readily visible. At this and the other sites an overstorey of trees made interpretation difficult. Whilst the low resolution characteristics and the lack of stereoscopic coverage hindered interpretation, it was felt that this type of imagery with its vast coverage, present low cost and potential for repeated sampling is a useful addition to conventional aerial photography for all rangeland types.

  7. Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery

    NASA Astrophysics Data System (ADS)

    Korzeniowska, Karolina; Bühler, Yves; Marty, Mauro; Korup, Oliver

    2017-10-01

    Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping is expensive and time consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows applying this method across large areas. We trained the method by analysing the properties of snow avalanches at three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of > 0.9 and a Cohen's kappa of 0.79-0.85. Testing the method for a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by > 80 % occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of our selected thresholds and the transferability of the method.
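The spectral indices the algorithm relies on are straightforward band ratios. A toy sketch; the band values and thresholds are invented, not the calibrated ADS80 settings:

```python
import numpy as np

def spectral_indices(nir, red, green):
    """NDVI and NDWI from per-pixel band reflectances (small epsilon avoids /0)."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    ndwi = (green - nir) / (green + nir + 1e-9)
    return ndvi, ndwi

# Tiny synthetic 2x2 scene: columns are (snow-like, vegetation-like) pixels
nir   = np.array([[0.8, 0.2], [0.7, 0.6]])
red   = np.array([[0.7, 0.1], [0.6, 0.1]])
green = np.array([[0.9, 0.1], [0.8, 0.1]])

ndvi, ndwi = spectral_indices(nir, red, green)
# Snow candidates: bright, non-vegetated pixels (low NDVI, high NDWI)
snow = (ndvi < 0.2) & (ndwi > 0.05)
print(snow)
```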

  8. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reached pixel level, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract features necessary to improve the MLSPC accuracy to pixel level.
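The corner-detection step rests on the Harris operator. A minimal numpy-only version of the standard (non-adaptive) response, with an illustrative 3x3 smoothing window and k value:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Standard Harris corner response; window size and k are illustrative."""
    Iy, Ix = np.gradient(img.astype(float))     # simple finite-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                                  # 3x3 box filter (structure tensor smoothing)
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2                  # positive at corners, negative on edges

# A bright square on a dark background: the response peaks near its corners
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(np.unravel_index(np.argmax(R), R.shape))   # near a corner of the square
```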

  9. Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views

    DTIC Science & Technology

    2014-11-10

    collected these datasets using different aircraft. Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of using high-resolution cinema ...is another high-resolution camera that is cinema grade and high quality, with the capability of capturing videos with 4K resolution at 30 frames per...292.58 Imaging Systems and Accessories Blackmagic Production Camera 4 Crowd Counting using 4K Cameras High resolution cinema grade digital video

  10. 3D reconstruction optimization using imagery captured by unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bassie, Abby L.; Meacham, Sean; Young, David; Turnage, Gray; Moorhead, Robert J.

    2017-05-01

    Because unmanned air vehicles (UAVs) are emerging as an indispensable image acquisition platform in precision agriculture, it is vitally important that researchers understand how to optimize UAV camera payloads for analysis of surveyed areas. In this study, imagery captured by a Nikon RGB camera attached to a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes ranging from 45.72 m (150 ft.) to 121.92 m (400 ft.). After collecting imagery, two different software packages (MeshLab and AgiSoft) were used to measure predetermined reference objects within six three-dimensional (3-D) point clouds (one per altitude scenario). In-silico measurements were then compared to actual reference object measurements, as recorded with a tape measure. Deviations of in-silico measurements from actual measurements were recorded as Δx, Δy, and Δz. The average measurement deviation in each coordinate direction was then calculated for each of the six flight scenarios. Results from MeshLab vs. AgiSoft offered insight into the effectiveness of GPS-defined point cloud scaling in comparison to user-defined point cloud scaling. In three of the six flight scenarios flown, MeshLab's 3D imaging software (user-defined scale) was able to measure object dimensions from 50.8 to 76.2 cm (20-30 inches) with greater than 93% accuracy. The largest average deviation in any flight scenario from actual measurements was 14.77 cm (5.82 in.). Analysis of the point clouds in AgiSoft (GPS-defined scale) yielded even smaller Δx, Δy, and Δz than the MeshLab measurements in over 75% of the flight scenarios. The precisions of these results are satisfactory in a wide variety of precision agriculture applications focused on differentiating and identifying objects using remote imagery.
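The accuracy bookkeeping described above (Δx, Δy, Δz against tape-measured truth) reduces to per-axis mean absolute deviations. The values below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Hypothetical reference-object dimensions (metres): rows are objects,
# columns are x, y, z measurements.
in_silico = np.array([[0.51, 0.76, 0.30],
                      [0.49, 0.74, 0.28],
                      [0.53, 0.78, 0.33]])       # measured in the point cloud
actual = np.array([[0.508, 0.762, 0.305]] * 3)   # tape-measured truth

deltas = np.abs(in_silico - actual)   # per-object |dx|, |dy|, |dz|
mean_dev = deltas.mean(axis=0)        # average deviation per axis for this flight
print(mean_dev)
```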

  11. Gypsy moth defoliation assessment: Forest defoliation is detectable from satellite imagery. [New England, New York, Pennsylvania, and New Jersey

    NASA Technical Reports Server (NTRS)

    Moore, H. J. (Principal Investigator); Rohde, W. G.

    1975-01-01

    The author has identified the following significant results. ERTS-1 imagery obtained over eastern Pennsylvania during July 1973, indicates that forest defoliation is detectable from satellite imagery and correlates well with aerial visual survey data. It now appears that two damage classes (heavy and moderate-light) and areas of no visible defoliation can be detected and mapped from properly prepared false composite imagery. In areas where maple is the dominant species or in areas of small woodlots interspersed with agricultural areas, detection and subsequent mapping is more difficult.

  12. Extraction of Dems and Orthoimages from Archive Aerial Imagery to Support Project Planning in Civil Engineering

    NASA Astrophysics Data System (ADS)

    Cogliati, M.; Tonelli, E.; Battaglia, D.; Scaioni, M.

    2017-12-01

    Archive aerial photos represent a valuable heritage, providing information about land cover and topography in past years. Today, the availability of low-cost and open-source solutions for photogrammetric processing of close-range and drone images offers the chance to produce outputs such as DEMs and orthoimages in an easy way. This paper aims to demonstrate how, and to what level of accuracy, digitized archive aerial photos may be used within such low-cost software (Agisoft Photoscan Professional®) to generate photogrammetric outputs. Different steps of the photogrammetric processing workflow are presented and discussed. The main conclusion is that this procedure can provide final products, although these do not feature the high accuracy and resolution obtainable with high-end photogrammetric software packages specifically designed for aerial survey projects. In the last part, a case study is presented on the use of a four-epoch archive of aerial images to analyze an area where a tunnel is to be excavated.

  13. Feasibility study for automatic reduction of phase change imagery

    NASA Technical Reports Server (NTRS)

    Nossaman, G. O.

    1971-01-01

    The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.

  15. Photocopy of aerial photograph, Pacific Air Industries, Flight 123V, June ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of aerial photograph, Pacific Air Industries, Flight 123V, June 29, 1960 (University of California, Santa Barbara, Map and Imagery Collection) PORTION OF IRVINE RANCH SHOWING SITE CA-2275-A IN LOWER LEFT QUADRANT AND SITE CA-2275-B IN UPPER RIGHT QUADRANT (see separate photograph index for 2275-B) - Irvine Ranch Agricultural Headquarters, Carillo Tenant House, Southwest of Intersection of San Diego & Santa Ana Freeways, Irvine, Orange County, CA

  16. Aerial thermography studies of power plant heated lakes

    NASA Astrophysics Data System (ADS)

    Villa-Aleman, Eliel; Garrett, Alfred J.; Kurzeja, Robert J.; Pendergast, Malcolm M.

    2000-03-01

    Remote sensing temperature measurements of water bodies are complicated by the temperature differences between the true surface or 'skin' water and the bulk water below. Weather conditions control the reduction of the skin temperature relative to the bulk water temperature. Typical skin temperature depressions range from a few tenths of a degree Celsius to more than one degree. In this research project, the Savannah River Technology Center used aerial thermography and surface-based meteorological and water temperature measurements to study a power plant cooling lake in South Carolina. Skin and bulk water temperatures were measured simultaneously for imagery calibration and to produce a database for modeling of skin temperature depressions as a function of weather and bulk water temperatures. This paper will present imagery that illustrates how the skin temperature depression was affected by different conditions in several locations on the lake and will present skin temperature modeling results.

  17. Detecting Waste Tire Sites Using Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Quinlan, B.; Huybrechts, C.; Schmidt, C.; Skiles, J. W.

    2005-12-01

    Waste tire piles pose environmental threats in the form of toxic fires and potential insect habitat. Previous techniques used to locate tire piles have included California Highway Patrol aerial surveillance and location tips from stakeholders. The TIRe (Tire Identification from Reflectance) model was developed as part of a pilot project funded by the California Integrated Waste Management Board (CIWMB), a division of the California Environmental Protection Agency, and executed at NASA Ames Research Center's DEVELOP Program during the summer of 2005. The goal of the pilot project was to determine if high-resolution satellite imagery could be used to locate waste tire disposal sites. The TIRe model, built in Leica Geosystems' ERDAS Imagine Model Builder, was created to automate the process of isolating tires in satellite imagery in two land cover types found in California. The sole geospatial data input to the TIRe model was Space Imaging IKONOS imagery. Once the imagery was processed through the TIRe model, less than 1% of the original image remained, consisting only of dark pixels containing tires or spectrally similar features. The output, a binary image, was overlaid on the original image for visual interpretation. The TIRe model successfully identified waste tire piles as small as 400 tires and will prove to be a valuable tool for the detection, monitoring and remediation of waste tire sites.
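The core of such a model is a reflectance threshold that keeps only dark, tire-like pixels and discards the rest of the scene. A toy sketch; the synthetic image and threshold are assumptions, not the TIRe model's bands or rules:

```python
import numpy as np

# Synthetic reflectance image in [0.05, 1.0]; seeded RNG for repeatability.
image = np.random.default_rng(0).uniform(0.05, 1.0, size=(100, 100))

dark = image < 0.06        # binary mask of very dark (tire-like) pixels
fraction = dark.mean()     # share of the scene surviving the threshold
print(fraction)            # only a small residue remains for interpretation
```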

  18. The application of ERTS imagery to mapping snow cover in the western United States. [Salt Verde in Arizona and Sierra Nevada California

    NASA Technical Reports Server (NTRS)

    Barnes, J. C. (Principal Investigator); Bowley, C. J.; Simmes, D. A.

    1974-01-01

    The author has identified the following significant results. In much of the western United States a large part of the utilized water comes from accumulated mountain snowpacks; thus, accurate measurements of snow distributions are required for input to streamflow prediction models. The application of ERTS-1 imagery for mapping snow has been evaluated for two geographic areas, the Salt-Verde watershed in central Arizona and the southern Sierra Nevada in California. Techniques have been developed to identify snow and to differentiate between snow and cloud. The snow extent for these two drainage areas has been mapped from the MSS-5 (0.6 - 0.7 microns) imagery and compared with aerial survey snow charts, aircraft photography, and ground-based snow measurements. The results indicate that ERTS imagery has substantial practical applications for snow mapping. Snow extent can be mapped from ERTS-1 imagery in more detail than is depicted on aerial survey snow charts. Moreover, in Arizona and southern California cloud obscuration does not appear to be a serious deterrent to the use of satellite data for snow survey. The costs involved in deriving snow maps from ERTS-1 imagery appear to be very reasonable in comparison with existing data collection methods.

  19. Modeling vegetation heights from high resolution stereo aerial photography: an application for broad-scale rangeland monitoring

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Duniway, Michael; Elaksher, Ahmed

    2014-01-01

    Vertical vegetation structure in rangeland ecosystems can be a valuable indicator for assessing rangeland health and monitoring riparian areas, post-fire recovery, available forage for livestock, and wildlife habitat. Federal land management agencies are directed to monitor and manage rangelands at landscape scales, but traditional field methods for measuring vegetation heights are often too costly and time consuming to apply at these broad scales. Most emerging remote sensing techniques capable of measuring surface and vegetation height (e.g., LiDAR or synthetic aperture radar) are often too expensive, and require specialized sensors. An alternative remote sensing approach that is potentially more practical for managers is to measure vegetation heights from digital stereo aerial photographs. As aerial photography is already commonly used for rangeland monitoring, acquiring it in stereo enables three-dimensional modeling and estimation of vegetation height. The purpose of this study was to test the feasibility and accuracy of estimating shrub heights from high-resolution (HR, 3-cm ground sampling distance) digital stereo-pair aerial images. Overlapping HR imagery was taken in March 2009 near Lake Mead, Nevada and 5-cm resolution digital surface models (DSMs) were created by photogrammetric methods (aerial triangulation, digital image matching) for twenty-six test plots. We compared the heights of individual shrubs and plot averages derived from the DSMs to field measurements. We found strong positive correlations between field and image measurements for several metrics. Individual shrub heights tended to be underestimated in the imagery; however, accuracy was higher for dense, compact shrubs than for shrubs with thin branches. Plot averages of shrub height from DSMs were also strongly correlated to field measurements but consistently underestimated. Grasses and forbs were generally too small to be detected with the resolution of the DSMs. Estimates of
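Height estimation from a photogrammetric DSM amounts to differencing it against a ground surface and summarizing per shrub. A minimal sketch with made-up grids and a made-up shrub mask:

```python
import numpy as np

# Hypothetical 2x2 grids (metres): a photogrammetric DSM and a bare-earth
# ground model; differencing them gives a canopy-height model (CHM).
dsm    = np.array([[101.2, 101.9], [100.1, 100.2]])
ground = np.array([[100.0, 100.0], [100.0, 100.0]])
chm = dsm - ground   # per-pixel vegetation height

# Cells belonging to one shrub (from segmentation or a field map, assumed here)
shrub_mask = np.array([[True, True], [False, False]])
shrub_height = chm[shrub_mask].max()   # tallest point of the shrub, in metres
print(shrub_height)
```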

  20. Mountain pine beetle detection and monitoring: evaluation of airborne imagery

    NASA Astrophysics Data System (ADS)

    Roberts, A.; Bone, C.; Dragicevic, S.; Ettya, A.; Northrup, J.; Reich, R.

    2007-10-01

    The processing and evaluation of digital airborne imagery for detection, monitoring and modeling of mountain pine beetle (MPB) infestations is evaluated. The most efficient and reliable remote sensing strategy for identification and mapping of infestation stages ("current" to "red" to "grey" attack) of MPB in lodgepole pine forests is determined for the most practical and cost effective procedures. This research was planned to specifically enhance knowledge by determining the remote sensing imaging systems and analytical procedures that optimize resource management for this critical forest health problem. Within the context of this study, airborne remote sensing of forest environments for forest health determinations (MPB) is most suitably undertaken using multispectral digitally converted imagery (aerial photography) at scales of 1:8000 for early detection of current MPB attack and 1:16000 for mapping and sequential monitoring of red and grey attack. Digital conversion should be undertaken at 10 to 16 microns for B&W multispectral imagery and 16 to 24 microns for colour and colour infrared imagery. From an "operational" perspective, the use of twin mapping-cameras with colour and B&W or colour infrared film will provide the best approximation of multispectral digital imagery with near comparable performance in a competitive private sector context (open bidding).

  1. Evaluation of experimental UAV video change detection

    NASA Astrophysics Data System (ADS)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terrorist activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition with respect to flight path, weather conditions, and objects within the scene, and to obtain synthetic videos. Video frames that depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation, and especially parallax effects.
The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect
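The homography-based registration and difference analysis described in this entry can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' VBS3-based pipeline: it assumes the homography `H` has already been estimated, uses nearest-neighbour inverse warping, and the function names are ours.

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Inverse-warp: for every output pixel, look up the source pixel that
    the homography H (output coords -> input coords) maps it to."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)]).astype(float)
    src = H @ pts
    src /= src[2]                                  # perspective divide
    sx = np.round(src[0]).astype(int)              # nearest-neighbour lookup
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out

def change_mask(before, after, H, thresh=30):
    """Register 'after' onto 'before' via H, then threshold the absolute
    difference -- the simplest form of pixel-wise difference analysis."""
    registered = warp_homography(after, H, before.shape)
    diff = np.abs(before.astype(int) - registered.astype(int))
    return diff > thresh
```

A real system would add the error suppression the paper describes (for registration noise, vegetation, and parallax) on top of this raw difference mask.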

  2. Automatic mission planning algorithms for aerial collection of imaging-specific tasks

    NASA Astrophysics Data System (ADS)

    Sponagle, Paul; Salvaggio, Carl

    2017-05-01

The rapid advancement and availability of small unmanned aircraft systems (sUAS) has led to many novel exploitation tasks that utilize this unique aerial imagery data. Collection of this data requires novel flight planning to accomplish the task at hand. This work describes novel flight planning to better support structure-from-motion missions by minimizing occlusions, autonomous and periodic overflight of reflectance calibration panels to permit more efficient and accurate data collection under varying illumination conditions, and the collection of imagery data to study optical properties such as the bidirectional reflectance distribution function without disturbing the target in sensitive or remote areas of interest. These novel mission planning algorithms will provide scientists with additional tools to meet their future data collection needs.
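As a toy illustration of the kind of mission planning involved, the sketch below lays out serpentine flight lines over a rectangular area, spacing the lines so adjacent image strips overlap by a chosen sidelap. The function and parameter names are illustrative, not from the paper, and real planners must also handle turns, terrain, and geodetic coordinates.

```python
def survey_lines(width_m, height_m, footprint_w_m, sidelap=0.6):
    """Serpentine survey plan: parallel flight lines across a width_m x
    height_m area, spaced so image strips of width footprint_w_m overlap
    by `sidelap` (e.g. 0.6 = 60% side overlap)."""
    spacing = footprint_w_m * (1.0 - sidelap)
    xs = []
    x = footprint_w_m / 2.0            # first line covers the left edge
    while x - footprint_w_m / 2.0 < width_m:
        xs.append(x)
        x += spacing
    # alternate flight direction on each line to minimize transit time
    waypoints = []
    for i, xi in enumerate(xs):
        y0, y1 = (0.0, height_m) if i % 2 == 0 else (height_m, 0.0)
        waypoints.append(((xi, y0), (xi, y1)))
    return spacing, waypoints
```

For example, a 20 m image footprint with 50% sidelap yields 10 m line spacing.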

  3. Effects of video compression on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Cha, Jae; Preece, Bradley

    2008-04-01

The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To mitigate this problem, still and moving imagery can be compressed, often resulting in a greater than 100-fold decrease in required bandwidth. Compression, however, is generally not error-free, and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, and sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.

  4. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
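The spectral angle used above as the quality-assessment function is simply the angle between two pixel spectra treated as vectors in band space; it is insensitive to uniform illumination scaling, which is why classification algorithms favor it. A minimal implementation (function name ours):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two pixel spectra.
    Zero means identical spectral shape, regardless of brightness."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))   # clip guards rounding
```

A spectrum and a scaled copy of it (e.g. the same material under brighter illumination) have spectral angle zero, while dissimilar spectra approach π/2.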

  5. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs produces a vast amount of video to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities, such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.

  6. Vehicle detection in aerial surveillance using dynamic Bayesian networks.

    PubMed

    Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying

    2012-04-01

We present an automatic vehicle detection system for aerial surveillance. In this system, we move beyond the stereotypical existing frameworks for vehicle detection in aerial surveillance, which are either region based or sliding window based, and design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, despite performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and non-vehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for classification. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
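The idea of adapting edge thresholds to each image can be sketched as follows. Note the substitution: the paper uses moment-preserving threshold selection for the Canny detector, while this toy version uses a simple image-adaptive percentile threshold on gradient magnitude, which is only a stand-in for illustration; all names are ours.

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D grayscale image."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def auto_edge_mask(img, pct=90):
    """Edges = pixels whose gradient magnitude exceeds an image-adaptive
    percentile threshold (a stand-in for the paper's moment-preserving
    threshold selection)."""
    mag = gradient_magnitude(img)
    return mag > np.percentile(mag, pct)
```

Because the threshold is recomputed per image, the same code adapts to aerial images taken at different heights and contrast levels, which is the property the paper is after.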

  7. Mapping with MAV: Experimental Study on the Contribution of Absolute and Relative Aerial Position Control

    NASA Astrophysics Data System (ADS)

    Skaloud, J.; Rehak, M.; Lichti, D.

    2014-03-01

This study highlights the benefit of precise aerial position control in the context of mapping using frame-based imagery taken by small UAVs. We executed several flights with a custom Micro Aerial Vehicle (MAV) octocopter over a small calibration field equipped with 90 signalized targets and 25 ground control points. The octocopter carries a consumer-grade RGB camera, modified to ensure precise GPS time stamping of each exposure, as well as a multi-frequency/multi-constellation GNSS receiver. The GNSS antenna and camera are rigidly mounted together on a one-axis gimbal that allows control of the obliquity of the captured imagery. The presented experiments focus on including absolute and relative aerial control. We confirm practically that both approaches are very effective: absolute control allows omission of ground control points, while relative control requires only a minimal number of control points. Indeed, the latter method represents an attractive alternative in the context of MAVs for two reasons. First, the procedure is somewhat simplified (e.g. the lever-arm between the camera perspective center and the antenna phase center does not need to be determined) and, second, its principle allows employing a single-frequency antenna and carrier-phase GNSS receiver. This reduces the cost of the system as well as the payload, which in turn increases the flying time.

  8. Acquisition of airborne imagery in support of Deepwater Horizon oil spill recovery assessments

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Muller-Karger, Frank E.

    2012-09-01

    Remote sensing imagery was collected from a low flying aircraft along the near coastal waters of the Florida Panhandle and northern Gulf of Mexico and into Barataria Bay, Louisiana, USA, during March 2011. Imagery was acquired from an aircraft that simultaneously collected traditional photogrammetric film imagery, digital video, digital still images, and digital hyperspectral imagery. The original purpose of the project was to collect airborne imagery to support assessment of weathered oil in littoral areas influenced by the Deepwater Horizon oil and gas spill that occurred during the spring and summer of 2010. This paper describes the data acquired and presents information that demonstrates the utility of small spatial scale imagery to detect the presence of weathered oil along littoral areas in the northern Gulf of Mexico. Flight tracks and examples of imagery collected are presented and methods used to plan and acquire the imagery are described. Results suggest weathered oil in littoral areas after the spill was contained at the source.

  9. Integrated remotely sensed datasets for disaster management

    NASA Astrophysics Data System (ADS)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards, such as North America's NTSC and Europe's SECAM and PAL systems, that are then recorded using various video formats. This technology has recently been employed as a front-line remote sensing technology for damage assessment post-disaster. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM) is described. New improvements are proposed that address low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and after a disaster.

  10. Image degradation in aerial imagery duplicates. [photographic processing of photographic film and reproduction (copying)]

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    A series of Earth Resources Aircraft Program data flights were made over an aerial test range in Arizona for the evaluation of large cameras. Specifically, both medium altitude and high altitude flights were made to test and evaluate a series of color as well as black-and-white films. Image degradation, inherent in duplication processing, was studied. Resolution losses resulting from resolution characteristics of the film types are given. Color duplicates, in general, are shown to be degraded more than black-and-white films because of the limitations imposed by available aerial color duplicating stock. Results indicate that a greater resolution loss may be expected when the original has higher resolution. Photographs of the duplications are shown.

  11. ERTS-1 imagery use in reconnaissance prospecting: Evaluation of commercial utility of ERTS-1 imagery in structural reconnaissance for minerals and petroleum

    NASA Technical Reports Server (NTRS)

    Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.

    1973-01-01

    The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach for reconnaissance for mineral, uranium, gas, and oil deposits and structures.

  12. Adding Insult to Imagery? Art Education and Censorship

    ERIC Educational Resources Information Center

    Sweeny, Robert W.

    2007-01-01

    The "Adding Insult to Imagery? Artistic Responses to Censorship and Mass-Media" exhibition opened on January 16, 2006, at Kipp Gallery on the Indiana University of Pennsylvania campus. Eleven gallery-based works, 9 videos, and 10 web-based artworks comprised the show; each dealt with the relationship between censorship and mass mediated…

  13. Presence for design: conveying atmosphere through video collages.

    PubMed

    Keller, I; Stappers, P J

    2001-04-01

    Product designers use imagery for inspiration in their creative design process. To support creativity, designers apply many tools and techniques, which often rely on their ability to be inspired by found and previously made visual material and to experience the atmosphere of the user environment. Computer tools and developments in VR offer perspectives to support this kind of imagery and presence in the design process. But currently these possibilities come at too high a technological overhead and price to be usable in the design practice. This article proposes an expressive and technically lightweight approach using the possibilities of VR and computer tools, by creating a sketchy environment using video collages. Instead of relying on highly realistic or even "hyperreal" graphics, these video collages use lessons learned from theater and cinema to get a sense of atmosphere across. Product designers can use these video collages to reexperience their observations in the environment in which a product is to be used, and to communicate this atmosphere to their colleagues and clients. For user-centered design, video collages can also provide an environmental context for concept testing with prospective user groups.

  14. Complex Building Detection Through Integrating LIDAR and Aerial Photos

    NASA Astrophysics Data System (ADS)

    Zhai, R.

    2015-02-01

    This paper proposes a new approach to building detection through the integration of LiDAR data and aerial imagery. It is known that most building rooftops are segmented into different regions grown from different seed pixels. Considering the principles of image segmentation, this paper employs a new region-based technique to segment images, combining the advantages of LiDAR and aerial images. First, multiple seed points are selected automatically, taking several constraints into consideration. Then region growing proceeds by combining the elevation attribute from the LiDAR data, the visibility attribute from the DEM (Digital Elevation Model), and the radiometric attribute from the warped images. Through this combination, pixels with similar height, visibility, and spectral attributes are merged into one region, which is believed to represent the whole building area. The proposed methodology was implemented on real data and competitive results were achieved.
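The multi-attribute region growing described in this entry can be sketched as a breadth-first flood fill that admits a neighbour only if all attributes stay close to the seed's values. This is a minimal two-attribute sketch (LiDAR height plus one spectral band) with illustrative thresholds and function names; the paper additionally uses a visibility attribute.

```python
import numpy as np
from collections import deque

def grow_region(height, spectral, seed, dh=0.5, ds=10.0):
    """4-connected region growing from `seed`: a neighbour joins the region
    only if both its LiDAR height and its image intensity are within dh / ds
    of the seed pixel's values."""
    h0, s0 = height[seed], spectral[seed]
    rows, cols = height.shape
    region = np.zeros_like(height, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if (abs(height[nr, nc] - h0) <= dh
                        and abs(spectral[nr, nc] - s0) <= ds):
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region
```

Requiring all attributes to agree is what stops a rooftop region from leaking onto adjacent ground that happens to share its color.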

  15. Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology

    NASA Astrophysics Data System (ADS)

    Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.

    2014-02-01

    Monitoring the response of the Yellow River icicle hazard to change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the intensively monitored Yellow River ice area in southern BaoTou, in the Inner Mongolia autonomous region; monitoring ran from 20 February to 30 March 2013. Using the proposed video data processing method, automatic extraction of 1832 video key frames covering an area of 7.8 km2 took 34.786 seconds; stitching and correction took 122.34 seconds, with an accuracy better than 0.5 m. Through comparison of the precisely processed sequence of stitched video images, the method determines changes in the Yellow River ice and accurately locates the ice bar, improving on the traditional visual method by more than 100 times. The results provide accurate decision-aid information for the Yellow River ice prevention headquarters. Finally, the effect of the ice-dam break is repeatedly monitored, and the ice break is measured to five-metre accuracy through accurate monitoring and evaluation analysis.

  16. Preliminary statistical studies concerning the Campos RJ sugar cane area, using LANDSAT imagery and aerial photographs

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.

    1983-01-01

    The two phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. Correlation between existing aerial photography and LANDSAT data was used. The two phase sampling technique corresponded to 99.6% of the results obtained by aerial photography, taken as ground truth. This estimate has a standard deviation of 225 ha, which constitutes a coefficient of variation of 0.6%.
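Two-phase (double) sampling of the kind described above typically measures the cheap variable (LANDSAT classification) on a large phase-1 sample and the accurate variable (aerial photo interpretation) on a small subsample, then scales the phase-1 mean by the calibrated ratio. The sketch below is an illustrative ratio-estimator form, not necessarily the study's exact estimator; the numbers in the usage note are hypothetical.

```python
def two_phase_ratio_estimate(photo_subsample, landsat_subsample,
                             landsat_mean_phase1):
    """Double-sampling ratio estimator: calibrate the cheap LANDSAT
    measurements against accurate photo measurements on a subsample,
    then scale the large phase-1 LANDSAT mean by the estimated ratio."""
    r = sum(photo_subsample) / sum(landsat_subsample)
    return r * landsat_mean_phase1
```

E.g., if photo interpretation on the subsample totals 30 ha where LANDSAT reports 20 ha, the ratio 1.5 corrects a phase-1 LANDSAT mean of 40 ha to 60 ha. The high correlation between the two data sources is what makes the corrected estimate's variance small, consistent with the 0.6% coefficient of variation reported above.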

  17. Remote sensing and GIS integration: Towards intelligent imagery within a spatial data infrastructure

    NASA Astrophysics Data System (ADS)

    Abdelrahim, Mohamed Mahmoud Hosny

    2001-11-01

    In this research, an "Intelligent Imagery System Prototype" (IISP) was developed. IISP is an integration tool that facilitates the environment for active, direct, and on-the-fly usage of high resolution imagery, internally linked to hidden GIS vector layers, to query the real world phenomena and, consequently, to perform exploratory types of spatial analysis based on a clear/undisturbed image scene. The IISP was designed and implemented using the software components approach to verify the hypothesis that a fully rectified, partially rectified, or even unrectified digital image can be internally linked to a variety of different hidden vector databases/layers covering the end user area of interest, and consequently may be reliably used directly as a base for "on-the-fly" querying of real-world phenomena and for performing exploratory types of spatial analysis. Within IISP, differentially rectified, partially rectified (namely, IKONOS GEOCARTERRA(TM)), and unrectified imagery (namely, scanned aerial photographs and captured video frames) were investigated. The system was designed to handle four types of spatial functions, namely, pointing query, polygon/line-based image query, database query, and buffering. The system was developed using ESRI MapObjects 2.0a as the core spatial component within Visual Basic 6.0. When used to perform the pre-defined spatial queries using different combinations of image and vector data, the IISP provided the same results as those obtained by querying pre-processed vector layers even when the image used was not orthorectified and the vector layers had different parameters. In addition, the real-time pixel location orthorectification technique developed and presented within the IKONOS GEOCARTERRA(TM) case provided a horizontal accuracy (RMSE) of +/- 2.75 metres. This accuracy is very close to the accuracy level obtained when purchasing the orthorectified IKONOS PRECISION products (RMSE of +/- 1.9 metre). 
The latter cost approximately four

  18. Suitability of low cost commercial off-the-shelf aerial platforms and consumer grade digital cameras for small format aerial photography

    NASA Astrophysics Data System (ADS)

    Turley, Anthony Allen

    Many research projects require the use of aerial images. Wetlands evaluation, crop monitoring, wildfire management, environmental change detection, and forest inventory are but a few of the applications of aerial imagery. Low altitude Small Format Aerial Photography (SFAP) is a bridge between satellite and man-carrying aircraft image acquisition and ground-based photography. The author's project evaluates digital images acquired using low cost commercial digital cameras and standard model airplanes to determine their suitability for remote sensing applications. Images from two different sites were obtained. Several photo missions were flown over each site, acquiring images in the visible and near infrared electromagnetic bands. Images were sorted and analyzed to select those with the least distortion, and blended together with Microsoft Image Composite Editor. By selecting images taken only minutes apart, the radiometric qualities of the images were virtually identical, yielding no blend lines in the composites. A commercial image stitching program, Autopano Pro, was purchased during the later stages of this study. Autopano Pro was often able to mosaic photos that the free Image Composite Editor was unable to combine. Using telemetry data from an onboard data logger, images were evaluated to calculate scale and spatial resolution. ERDAS ER Mapper and ESRI ArcGIS were used to rectify composite images. Despite the limitations inherent in consumer grade equipment, images of high spatial resolution were obtained. Mosaics of as many as 38 images were created, and the author was able to record detailed aerial images of forest and wetland areas where foot travel was impractical or impossible.

  19. Aerial surveys adjusted by ground surveys to estimate area occupied by black-tailed prairie dog colonies

    USGS Publications Warehouse

    Sidle, John G.; Augustine, David J.; Johnson, Douglas H.; Miller, Sterling D.; Cully, Jack F.; Reading, Richard P.

    2012-01-01

    Aerial surveys using line-intercept methods are one approach to estimate the extent of prairie dog colonies in a large geographic area. Although black-tailed prairie dogs (Cynomys ludovicianus) construct conspicuous mounds at burrow openings, aerial observers have difficulty discriminating between areas with burrows occupied by prairie dogs (colonies) versus areas of uninhabited burrows (uninhabited colony sites). Consequently, aerial line-intercept surveys may overestimate prairie dog colony extent unless adjusted by an on-the-ground inspection of a sample of intercepts. We compared aerial line-intercept surveys conducted over 2 National Grasslands in Colorado, USA, with independent ground-mapping of known black-tailed prairie dog colonies. Aerial line-intercepts adjusted by ground surveys using a single activity category adjustment overestimated colonies by ≥94% on the Comanche National Grassland and ≥58% on the Pawnee National Grassland. We present a ground-survey technique that involves 1) visiting on the ground a subset of aerial intercepts classified as occupied colonies plus a subset of intercepts classified as uninhabited colony sites, and 2) based on these ground observations, recording the proportion of each aerial intercept that intersects a colony and the proportion that intersects an uninhabited colony site. Where line-intercept techniques are applied to aerial surveys or remotely sensed imagery, this method can provide more accurate estimates of black-tailed prairie dog abundance and trends.
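The adjustment in steps 1) and 2) above amounts to scaling each aerial intercept length by the ground-observed fraction that actually crossed an occupied colony. A minimal arithmetic sketch (function name and numbers are hypothetical, not from the study):

```python
def adjust_line_intercepts(lengths_m, occupied_fraction):
    """Aerial line-intercept totals before and after ground adjustment:
    each intercept length is scaled by the ground-observed fraction that
    crossed an occupied colony (vs. an uninhabited colony site).
    Returns (raw total, adjusted total, overestimate percentage)."""
    raw = sum(lengths_m)
    adjusted = sum(L * f for L, f in zip(lengths_m, occupied_fraction))
    overestimate_pct = 100.0 * (raw - adjusted) / adjusted
    return raw, adjusted, overestimate_pct
```

For instance, two intercepts of 100 m and 200 m that each turn out to be only half occupied give an adjusted total of 150 m, i.e. the unadjusted survey overestimated occupancy by 100%.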

  20. Evaluation of unmanned aerial vehicles (UAVs) for detection of cattle in the Cattle Fever Tick Permanent Quarantine Zone

    USDA-ARS?s Scientific Manuscript database

    An unmanned aerial vehicle was used to capture videos of cattle in pastures to determine the efficiency of this technology for use by Mounted Inspectors in the Permanent Quarantine zone (PQZ) of the Cattle Fever Tick Eradication Program in south Texas along the U.S.-Mexico Border. These videos were ...

  1. High-resolution streaming video integrated with UGS systems

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew

    2010-04-01

    Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems, providing ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems, with the result that these systems produce lower-resolution images in small quantities. Currently, a high-resolution wireless imaging system is being developed to bring megapixel streaming video to remote locations and to operate in concert with UGS. This paper provides an overview of how Wi-Fi radios, new image-based Digital Signal Processors (DSPs) running advanced target detection algorithms, and high-resolution cameras give the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.

  2. BisQue: cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.

    2016-02-01

    Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNNs), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diverse data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around the idea of flexible and hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with drop-out and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.

  3. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data
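Of the estimators named in this entry, the relative edge response (RER) is the easiest to sketch: given a 1-D edge-spread function (ESF) sampled across a uniform high-contrast edge, RER is the normalized response difference half a pixel on either side of the interpolated edge location. The sketch below is a simplified illustration assuming a monotonically increasing profile, not the SRVT's MATLAB implementation.

```python
import numpy as np

def relative_edge_response(esf):
    """RER from a 1-D edge-spread function (assumed monotonically
    increasing): normalized response difference half a pixel on either
    side of the interpolated 50% edge-crossing location."""
    esf = np.asarray(esf, dtype=float)
    esf = (esf - esf.min()) / (esf.max() - esf.min())   # normalize to [0, 1]
    x = np.arange(esf.size, dtype=float)
    edge = np.interp(0.5, esf, x)        # position of the 50% crossing
    lo = np.interp(edge - 0.5, x, esf)
    hi = np.interp(edge + 0.5, x, esf)
    return float(hi - lo)
```

A perfectly sharp edge gives RER near 1, while blur spreads the transition over several pixels and drives RER toward 0, which is why RER works as a no-target sharpness estimator on urban edges.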

  4. Detection of Tree Crowns Based on Reclassification Using Aerial Images and LIDAR Data

    NASA Astrophysics Data System (ADS)

    Talebi, S.; Zarea, A.; Sadeghian, S.; Arefi, H.

    2013-09-01

    Tree detection using aerial sensors has been a focus of many researchers in different fields, including Remote Sensing and Photogrammetry, over recent decades. This paper is intended to detect trees in complex city areas using aerial imagery and laser scanning data. Our methodology is a hierarchical unsupervised method consisting of several primitive operations. The method can be divided into three sections: the first uses aerial imagery, while the second and third use laser scanner data. In the first section a vegetation cover mask is created for both sunny and shadowed areas. In the second section the Rate of Slope Change (RSC) is used to eliminate grasses. In the third section a Digital Terrain Model (DTM) is obtained from the LiDAR data. From the DTM and the Digital Surface Model (DSM) we derive the Normalized Digital Surface Model (nDSM); objects lower than a specific height are then eliminated. This yields three result layers, one per section, which are multiplied together to obtain the final result layer. This layer is then smoothed by morphological operations. The result layer was submitted to ISPRS WG III/4 for evaluation. The evaluation shows that our method ranks well compared with the other participants' methods in ISPRS WG III/4 when assessed in terms of five indices: area-based completeness, area-based correctness, object-based completeness, object-based correctness, and boundary RMS. Being unsupervised and automatic, the method can be improved further and integrated with other methods to obtain better results.
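The nDSM step in the third section is simply the DSM minus the DTM, followed by a height threshold that removes ground-level objects. A minimal sketch (threshold value and names are illustrative):

```python
import numpy as np

def ndsm_mask(dsm, dtm, min_height=2.0):
    """nDSM = DSM - DTM (object height above ground); the mask keeps only
    objects taller than min_height, filtering out ground and low vegetation."""
    ndsm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    return ndsm, ndsm > min_height
```

In the paper's pipeline, a mask like this would then be multiplied with the vegetation and RSC layers so that only tall, vegetated, non-grass pixels survive as tree candidates.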

  5. Aerial Images from an UAV System: 3D Modeling and Tree Species Classification in a Park Area

    NASA Astrophysics Data System (ADS)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to improve its usability by employing mobile audiovisual systems for 3D reconstruction, and to improve monitoring procedures by using new media that integrate the fruition phase with the preservation one. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of assets (small buildings, agricultural fields, and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Tests were then performed to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed by GPS to allow accuracy analysis. Aerial Triangulations (ATs) were carried out with commercial photogrammetric software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to identify the pros and cons of each package in managing non-conventional aerial imagery, as well as the differences in the modeling approach. Further analyses were performed on the differences between the EO parameters and the corresponding data from the on-board UAV navigation system.

  6. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  7. Unmanned aerial systems for photogrammetry and remote sensing: A review

    NASA Astrophysics Data System (ADS)

    Colomina, I.; Molina, P.

    2014-06-01

    We discuss the evolution and state-of-the-art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). UAS, Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or simply drones are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Naivety and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few hundred euros. In this review article, following a brief historic background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing, with emphasis on the nano-micro-mini UAS segment.

  8. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of use for mapping purposes, owing to advantages such as large-area imaging from above and the minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementary to traditional vertical views: they capture not only conventional nadir views but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection and disaster assessment is addressed. The main idea is to inspect a building from the four cardinal directions using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate heights of buildings and ground distances and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, the quality of the available parameters (DEM, calibration and orientation values), and the user's expertise and measuring capability.

  9. Unmanned Aerial Vehicle (UAV) Dynamic-Tracking Directional Wireless Antennas for Low Powered Applications that Require Reliable Extended Range Operations in Time Critical Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott G. Bauer; Matthew O. Anderson; James R. Hanneman

    2005-10-01

    The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public-service first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. ‘Packable’ or ‘Portable’ small-class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low-bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high-bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity-gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV is limited in the amount of radio-frequency (RF) energy it can transmit to the users. Therefore, ‘packable’ and ‘portable’ UAVs will have limited useful operational ranges for first responders. This paper discusses the limitations of small-UAV wireless communications and presents an approach that uses a dynamic, ground-based, real-time-tracking high-gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV-deployed wireless assets.
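The range benefit of a high-gain tracking antenna can be illustrated with a standard free-space link budget. This is a back-of-envelope sketch; the transmit power, frequency, distance, and antenna gains are assumed numbers, not values from the paper.

```python
# Sketch: ground-antenna gain adds directly to the link budget; every
# 6 dB of extra margin roughly doubles the usable free-space range.
import math

def fspl_db(d_km, f_mhz):
    # free-space path loss, distance in km, frequency in MHz
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def received_dbm(pt_dbm, gt_dbi, gr_dbi, d_km, f_mhz):
    return pt_dbm + gt_dbi + gr_dbi - fspl_db(d_km, f_mhz)

# 1 W (30 dBm) UAV video transmitter at 2400 MHz, 10 km out
p_omni = received_dbm(30, 0, 0, 10, 2400)    # unity-gain omni on both ends
p_dish = received_dbm(30, 0, 24, 10, 2400)   # 24 dBi tracking dish on ground
print(round(p_dish - p_omni))                # → 24  (dB of extra margin)
print(round(10 ** (24 / 20), 1))             # → 15.8 (range multiplier)
```

The point of the sketch is that the gain is added on the ground, so the power-constrained UAV payload is untouched, which is the core of the paper's approach.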

  10. Use of multi-temporal UAV-derived imagery for estimating individual tree growth in Pinus pinea stands

    Treesearch

    Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela

    2017-01-01

    High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...

  11. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-01-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and the surface characteristics of the area. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because in many parts of the globe accurate land-use information is generally lacking, as detailed image data are unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images on a local level at comparatively low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison in which they are used as input for an urban drainage model. Then, we evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model

  12. Modeling vegetation heights from high resolution stereo aerial photography: an application for broad-scale rangeland monitoring.

    PubMed

    Gillan, Jeffrey K; Karl, Jason W; Duniway, Michael; Elaksher, Ahmed

    2014-11-01

    Vertical vegetation structure in rangeland ecosystems can be a valuable indicator for assessing rangeland health and monitoring riparian areas, post-fire recovery, available forage for livestock, and wildlife habitat. Federal land management agencies are directed to monitor and manage rangelands at landscape scales, but traditional field methods for measuring vegetation heights are often too costly and time-consuming to apply at these broad scales. Most emerging remote sensing techniques capable of measuring surface and vegetation height (e.g., LiDAR or synthetic aperture radar) are often too expensive and require specialized sensors. An alternative remote sensing approach that is potentially more practical for managers is to measure vegetation heights from digital stereo aerial photographs. As aerial photography is already commonly used for rangeland monitoring, acquiring it in stereo enables three-dimensional modeling and estimation of vegetation height. The purpose of this study was to test the feasibility and accuracy of estimating shrub heights from high-resolution (HR, 3-cm ground sampling distance) digital stereo-pair aerial images. Overlapping HR imagery was taken in March 2009 near Lake Mead, Nevada, and 5-cm resolution digital surface models (DSMs) were created by photogrammetric methods (aerial triangulation, digital image matching) for twenty-six test plots. We compared the heights of individual shrubs and plot averages derived from the DSMs to field measurements. We found strong positive correlations between field and image measurements for several metrics. Individual shrub heights tended to be underestimated in the imagery; however, accuracy was higher for dense, compact shrubs than for shrubs with thin branches. Plot averages of shrub height from DSMs were also strongly correlated with field measurements but consistently underestimated. Grasses and forbs were generally too small to be detected at the resolution of the DSMs. Estimates of

  13. Science documentary video slides to enhance education and communication

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Little, L. J.; Dodgson, K.

    2010-12-01

    Documentary production can convey powerful messages using a combination of authentic science and reinforcing video imagery. A conventional documentary contains too much information for many viewers to follow, so many powerful points may be lost. But documentary productions that are re-edited into short video sequences and made available through web-based video servers allow the teacher or viewer to access the material as video slides. Each video slide contains one critical discussion segment of the larger documentary. A teacher or viewer can review the documentary one segment at a time in a classroom, a public forum, or the comfort of home. The sequential presentation of the video slides allows the viewer to best absorb the documentary's message. The website environment provides space for additional questions and discussion to enhance the video message.

  14. Enhancing voluntary imitation through attention and motor imagery.

    PubMed

    Bek, Judith; Poliakoff, Ellen; Marshall, Hannah; Trueman, Sophie; Gowen, Emma

    2016-07-01

    Action observation activates brain areas involved in performing the same action and has been shown to increase motor learning, with potential implications for neurorehabilitation. Recent work indicates that the effects of action observation on movement can be increased by motor imagery or by directing attention to observed actions. In voluntary imitation, activation of the motor system during action observation is already increased. We therefore explored whether imitation could be further enhanced by imagery or attention. Healthy participants observed and then immediately imitated videos of human hand movement sequences, while movement kinematics were recorded. Two blocks of trials were completed, and after the first block participants were instructed to imagine performing the observed movement (Imagery group, N = 18) or attend closely to the characteristics of the movement (Attention group, N = 15), or received no further instructions (Control group, N = 17). Kinematics of the imitated movements were modulated by instructions, with both Imagery and Attention groups being closer in duration, peak velocity and amplitude to the observed model compared with controls. These findings show that both attention and motor imagery can increase the accuracy of imitation and have implications for motor learning and rehabilitation. Future work is required to understand the mechanisms by which these two strategies influence imitation accuracy.

  15. Motivational Videos and the Library Media Specialist: Teachers and Students on Film--Take 1

    ERIC Educational Resources Information Center

    Bohot, Cameron Brooke; Pfortmiller, Michelle

    2009-01-01

    Today's students are bombarded with digital imagery and sound nearly 24 hours of the day. Video use in the classroom is engaging, and a teacher can instantly grab her students' attention. The content of the videos comes from many sources; the curriculum, the student handbook, and even the school rules. By creating the videos, teachers are not only…

  16. Application of ERTS imagery in estimating the environmental impact of a freeway through the Knysna area of South Africa

    NASA Technical Reports Server (NTRS)

    Williamson, D. T.; Gilbertson, B.

    1974-01-01

    In the coastal areas north-east and south-west of Knysna, South Africa lie natural forests, lakes and lagoons highly regarded by many for their aesthetic and ecological richness. A freeway construction project has given rise to fears of the degradation or destruction of these natural features. The possibility was investigated of using ERTS imagery to estimate the environmental impact of the freeway and found that: (1) All threatened features could readily be identified on the imagery. (2) It was possible within a short time to provide an area estimate of damage to indigenous forest. (3) In several important respects the imagery has advantages over maps and aerial photos for this type of work. (4) The imagery will enable monitoring of the actual environmental impact of the freeway when completed.

  17. UFCN: a fully convolutional neural network for road extraction in RGB imagery acquired by remote sensing from an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Kestur, Ramesh; Farooq, Shariq; Abdal, Rameen; Mehraj, Emad; Narasipura, Omkar; Mudigere, Meenavathi

    2018-01-01

    Road extraction in imagery acquired by low-altitude remote sensing (LARS) carried out using an unmanned aerial vehicle (UAV) is presented. LARS is carried out using a fixed-wing UAV with a high-spatial-resolution visible-spectrum (RGB) camera as the payload. Deep learning techniques, particularly the fully convolutional network (FCN), are adopted to extract roads by dense semantic segmentation. The proposed model, UFCN (U-shaped FCN), is an FCN architecture comprising a stack of convolutions followed by a corresponding stack of mirrored deconvolutions, with skip connections in between to preserve local information. The limited dataset (76 images and their ground truths) is subjected to real-time data augmentation during the training phase to effectively increase its size. Classification performance is evaluated using precision, recall, accuracy, F1 score, and Brier score. The performance is compared with a support vector machine (SVM) classifier, a one-dimensional convolutional neural network (1D-CNN) model, and a standard two-dimensional CNN (2D-CNN). The UFCN model outperforms the SVM, 1D-CNN, and 2D-CNN models across all performance parameters. Further, the prediction time of the proposed UFCN model is comparable with those of the SVM, 1D-CNN, and 2D-CNN models.
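The evaluation metrics the abstract names can be sketched on toy per-pixel data. This is not the authors' evaluation code; the labels, probabilities, and the 0.5 threshold are invented for illustration.

```python
# Sketch: precision, recall, accuracy, F1, and Brier score for a binary
# (road / non-road) pixel classifier with probabilistic outputs.
def metrics(y_true, y_prob, thresh=0.5):
    y_pred = [1 if p >= thresh else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * precision * recall / (precision + recall)
    # Brier score uses the raw probabilities, not the thresholded labels
    brier = sum((p - t) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)
    return precision, recall, accuracy, f1, brier

y_true = [1, 1, 0, 0, 1, 0]               # 1 = road pixel (invented)
y_prob = [0.9, 0.8, 0.2, 0.6, 0.4, 0.1]   # model's road probability (invented)
p, r, a, f1, b = metrics(y_true, y_prob)
print(round(p, 2), round(r, 2), round(a, 2), round(f1, 2), round(b, 3))
```

The Brier score rewards calibrated probabilities (lower is better), which is why it complements the threshold-based metrics in the comparison.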

  18. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface.

    PubMed

    Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei

    2015-10-01

    Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI improves the associated mental imagery and enhances MI-based BCI skills. Copyright © 2014 Elsevier B.V. All rights reserved.
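ERD is conventionally quantified as a percentage band-power change relative to a rest baseline. A minimal sketch with made-up band-power values (the paper's actual recording and band parameters are not reproduced here):

```python
# Sketch: ERD% = (task - baseline) / baseline * 100, so a power drop
# during motor imagery gives a negative value (desynchronization).
def erd_percent(baseline_power, task_power):
    return (task_power - baseline_power) / baseline_power * 100.0

baseline = [4.1, 3.9, 4.0]   # mu-band power (uV^2) during rest trials (invented)
task     = [2.0, 2.1, 1.9]   # mu-band power during motor imagery trials (invented)
r = sum(baseline) / len(baseline)
a = sum(task) / len(task)
print(round(erd_percent(r, a)))   # → -50
```

A more strongly negative ERD% means stronger desynchronization, which is the training target in MI-based BCI neurofeedback.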

  19. Development of an autonomous video rendezvous and docking system, phase 2

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Richardson, T. E.

    1983-01-01

    The critical elements of an autonomous video rendezvous and docking system were built and used successfully in a physical laboratory simulation. The laboratory system demonstrated that a small, inexpensive electronic package and a flight computer of modest size can analyze television images to derive guidance information for spacecraft. In the ultimate application, the system would use a docking aid consisting of three flashing lights mounted on a passive target spacecraft. Television imagery of the docking aid would be processed aboard an active chase vehicle to derive relative positions and attitudes of the two spacecraft. The demonstration system used scale models of the target spacecraft with working docking aids. A television camera mounted on a 6 degree of freedom (DOF) simulator provided imagery of the target to simulate observations from the chase vehicle. A hardware video processor extracted statistics from the imagery, from which a computer quickly computed position and attitude. Computer software known as a Kalman filter derived velocity information from position measurements.
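The final step, a Kalman filter deriving velocity from position measurements, can be sketched with a one-dimensional constant-velocity filter. The noise parameters and measurement series below are assumptions for illustration, not the demonstration system's values.

```python
# Sketch: 1-D constant-velocity Kalman filter. Velocity is never measured
# directly; it is inferred from the sequence of noisy position fixes.
def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    x, v = zs[0], 0.0                 # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    for z in zs[1:]:
        # predict with the constant-velocity model x' = x + v*dt
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with the position measurement z (H = [1, 0])
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v

# target closing at ~2 m/s, range measured once per second with noise
zs = [100.0, 98.1, 95.9, 94.0, 92.1, 89.9, 88.0]
pos, vel = kalman_track(zs)
print(round(vel, 1))
```

After a few updates the velocity estimate settles near the true closing rate, which is the information the chase vehicle's guidance needs.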

  20. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of

  1. Enabling high-quality observations of surface imperviousness for water runoff modelling from unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Tokarczyk, Piotr; Leitao, Joao Paulo; Rieckermann, Jörg; Schindler, Konrad; Blumensaat, Frank

    2015-04-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and the surface characteristics of the area. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because in many parts of the globe accurate land-use information is generally lacking, as detailed image data are unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images on a local level at comparatively low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison in which they are used as input for an urban drainage model. Then, we evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual sub-catchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model

  2. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.

    PubMed

    Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan

    2017-12-01

    Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners today have easy, low-cost access to aerial photographs at remote locations. The present paper aims to explore the boundaries within which low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examination. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording to seek out scattered body parts was efficient. In contrast, less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Integrating unmanned aerial systems and LSPIV for rapid, cost-effective stream gauging

    NASA Astrophysics Data System (ADS)

    Lewis, Quinn W.; Lindroth, Evan M.; Rhoads, Bruce L.

    2018-05-01

    Quantifying flow in rivers is fundamental to assessments of water supply, water quality, ecological conditions, hydrological responses to storm events, and geomorphological processes. Image-based surface velocity measurements have shown promise in extending the range of discharge conditions that can be measured in the field. The use of Unmanned Aerial Systems (UAS) in image-based measurements of surface velocities has the potential to expand applications of this method. Thus far, few investigations have assessed this potential by evaluating the accuracy and repeatability of discharge measurements using surface velocities obtained from UAS. This study uses large-scale particle image velocimetry (LSPIV) derived from videos captured by cameras on a UAS and a fixed tripod to obtain discharge measurements at ten different stream locations in Illinois, USA. Discharge values are compared to reference values measured by an acoustic Doppler current profiler, a propeller meter, and established stream gauges. The results demonstrate the effects of UAS flight height, camera steadiness and leveling accuracy, video sampling frequency, and LSPIV interrogation area size on surface velocities, and show that the mean difference between fixed and UAS cameras is less than 10%. Differences between LSPIV-derived and reference discharge values are generally less than 20%, not systematically low or high, and not related to site parameters like channel width or depth, indicating that results are relatively insensitive to camera setup and image processing parameters typically required of LSPIV. The results also show that standard velocity indices (between 0.85 and 0.9) recommended for converting surface velocities to depth-averaged velocities yield reasonable discharge estimates, but are best calibrated at specific sites. The study recommends a basic methodology for LSPIV discharge measurements using UAS that is rapid, cost-efficient, and does not require major preparatory work at a
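The conversion from LSPIV surface velocities to discharge can be sketched with the mid-section method. The cross-section geometry and velocities below are invented; only the velocity index of 0.85 comes from the range the abstract recommends.

```python
# Sketch: depth-averaged velocity = k * surface velocity (k ~ 0.85-0.9),
# discharge summed over panels between measurement stations.
def discharge(stations, depths, surface_v, k=0.85):
    """stations: distance across the channel (m); depths (m); surface_v (m/s)."""
    q = 0.0
    for i in range(len(stations) - 1):
        width = stations[i + 1] - stations[i]
        depth = 0.5 * (depths[i] + depths[i + 1])        # mean panel depth
        v_surf = 0.5 * (surface_v[i] + surface_v[i + 1]) # mean surface velocity
        q += width * depth * k * v_surf                  # panel discharge (m^3/s)
    return q

stations  = [0.0, 2.0, 4.0, 6.0, 8.0]   # invented 8 m wide channel
depths    = [0.0, 0.6, 0.9, 0.5, 0.0]
surface_v = [0.0, 0.8, 1.1, 0.7, 0.0]
print(round(discharge(stations, depths, surface_v), 2))
```

Because the index k enters linearly, the study's point that k is best calibrated per site translates directly into a proportional error in the estimated discharge.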

  4. Wetland mapping from digitized aerial photography. [Sheboygen Marsh, Sheboygen County, Wisconsin

    NASA Technical Reports Server (NTRS)

    Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.

    1981-01-01

    Computer assisted interpretation of small scale aerial imagery was found to be a cost effective and accurate method of mapping complex vegetation patterns if high resolution information is desired. This type of technique is suited for problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of being in a computer compatible form which can be transformed into any georeference system of interest.

  5. Identification of irrigated crop types from ERTS-1 density contour maps and color infrared aerial photography. [Wyoming

    NASA Technical Reports Server (NTRS)

    Marrs, R. W.; Evans, M. A.

    1974-01-01

    The author has identified the following significant results. The crop types of a Great Plains study area were mapped from color infrared aerial photography. Each field was positively identified from field checks in the area. Enlarged (50x) density contour maps were constructed from three ERTS-1 images taken in the summer of 1973. The map interpreted from the aerial photography was compared to the density contour maps, and the accuracy of the ERTS-1 density contour map interpretations was determined. Changes in the vegetation during the growing season and harvest periods were detectable on the ERTS-1 imagery. Density contouring aids in the detection of such changes.

  6. Imagery analysis and the need for standards

    NASA Astrophysics Data System (ADS)

    Grant, Barbara G.

    2014-09-01

    While efforts within the optics community focus on the development of high-quality systems and data products, comparatively little attention is paid to their use. Our standards for verification and validation are high; but in some user domains, standards are either lax or do not exist at all. In forensic imagery analysis, for example, standards exist to judge image quality, but not to judge the quality of an analysis. In litigation, a high-quality analysis is by default the one performed by the victorious attorney's expert. This paper argues for the need to extend quality standards into the domain of imagery analysis, which is expected to increase in national visibility and significance with the increasing deployment of unmanned aerial vehicle (UAV, or "drone") sensors in the continental U.S. It argues that, like a good radiometric calibration, made as independent of the calibrated instrument as possible, a good analysis should be subject to standards, the most basic of which is the separation of issues of scientific fact from analysis results.

  7. 3D Object Classification Based on Thermal and Visible Imagery in Urban Area

    NASA Astrophysics Data System (ADS)

    Hasani, H.; Samadzadegan, F.

    2015-12-01

    The spatial distribution of land cover in urban areas, especially 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area. Data fusion has been widely investigated for integrating different sources of data in the classification of urban areas. Thermal infrared (TIR) imagery contains information on emitted radiation and has unique radiometric properties. However, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible (VIS) imagery has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relation between thermal and visible imagery in the classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery for classification of an urban area. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space including visible bands, thermal hyperspectral bands, and spatial and texture features; moreover, Principal Component Analysis (PCA) is applied to extract principal components. Due to the high dimensionality of the feature space, dimension reduction is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improves classification accuracy by up to 8% with respect to classification of the visible image alone.
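    A minimal sketch of the fusion-PCA-SVM chain described above, using scikit-learn on synthetic per-pixel features; the band counts, labels, and PCA/SVM settings are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 600
vis = rng.normal(size=(n, 3))    # visible bands (toy values)
tir = rng.normal(size=(n, 32))   # resampled thermal hyperspectral bands (toy)
labels = (vis[:, 0] + tir[:, 0] > 0).astype(int)  # toy two-class ground truth

# Feature-level fusion: stack visible and thermal features per pixel.
hybrid = np.hstack([vis, tir])

# Dimension reduction on the hybrid feature space.
pcs = PCA(n_components=8).fit_transform(hybrid)

# SVM classification of the reduced feature space.
X_tr, X_te, y_tr, y_te = train_test_split(pcs, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

    In a real pipeline the rows would be co-registered pixels, with spatial and texture features appended before the PCA step.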

  8. Quantitative analysis of drainage obtained from aerial photographs and RBV/LANDSAT images

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Formaggio, A. R.; Epiphanio, J. C. N.; Filho, M. V.

    1981-01-01

    Data obtained from aerial photographs (1:60,000) and LANDSAT return beam vidicon imagery (1:100,000) concerning drainage density, drainage texture, hydrography density, and the average length of channels were compared. Statistical analysis shows that significant differences exist in data from the two sources. The highly drained area lost more information than the less drained area. In addition, it was observed that the loss of information about the number of rivers was higher than that about the length of the channels.

  9. Assessing the spatial distribution of coral bleaching using small unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Levy, Joshua; Hunter, Cynthia; Lukacazyk, Trent; Franklin, Erik C.

    2018-06-01

    Small unmanned aerial systems (sUAS) are an affordable, effective complement to existing coral reef monitoring and assessment tools. sUAS provide repeatable low-altitude, high-resolution photogrammetry to address fundamental questions of spatial ecology and community dynamics for shallow coral reef ecosystems. Here, we qualitatively describe the use of sUAS to survey the spatial characteristics of coral cover and the distribution of coral bleaching across patch reefs in Kānéohe Bay, Hawaii, and address limitations and anticipated technology advancements within the field of UAS. Overlapping sub-decimeter low-altitude aerial reef imagery collected during the 2015 coral bleaching event was used to construct high-resolution reef image mosaics of coral bleaching responses on four Kānéohe Bay patch reefs, totaling 60,000 m2. Using sUAS imagery, we determined that paled, bleached and healthy corals on all four reefs were spatially clustered. Comparative analyses of data from sUAS imagery and in situ diver surveys found as much as 14% difference in coral cover values between survey methods, depending on the size of the reef and area surveyed. When comparing the abundance of unhealthy coral (paled and bleached) between sUAS and in situ diver surveys, we found differences in cover from 1 to 49%, depending on the depth of in situ surveys, the percent of reef area covered with sUAS surveys and patchiness of the bleaching response. This study demonstrates the effective use of sUAS surveys for assessing the spatial dynamics of coral bleaching at colony-scale resolutions across entire patch reefs and evaluates the complementarity of data from both sUAS and in situ diver surveys to more accurately characterize the spatial ecology of coral communities on reef flats and slopes.

  10. Unmanned aerial systems for forest reclamation monitoring: throwing balloons in the air

    NASA Astrophysics Data System (ADS)

    Andrade, Rita; Vaz, Eric; Panagopoulos, Thomas; Guerrero, Carlos

    2014-05-01

    Wildfires are a recurrent phenomenon in Mediterranean landscapes, deteriorating the environment and ecosystems and calling for adequate land management. Monitoring burned areas enhances our ability to reclaim them. Remote sensing has become an increasingly important tool for environmental assessment and land management. It is fast, non-intrusive, and provides continuous spatial coverage. This paper reviews remote sensing methods, based on space-borne, airborne, or ground-based multispectral imagery, for monitoring the biophysical properties of forest areas for site-specific management. The use of satellite imagery for land use management has been frequent in recent decades; it is of great use in determining plant health and crop conditions, allowing a synergy between the complexity of the environment, anthropogenic landscapes, and a multi-temporal understanding of spatial dynamics. Aerial photography improves on spatial resolution; nevertheless, it is heavily dependent on aircraft availability as well as cost. Both of these methods are suited to wide-area management and policy planning. For local management, an imagery source that offers high resolution, can be deployed at a specific moment, and reduces cost while maintaining locational flexibility is of utmost importance. In this sense, unmanned aerial vehicles provide maximum flexibility in image collection and can incorporate thermal and multispectral sensors; however, payload and engine operation time limit flight duration. Balloon remote sensing is becoming increasingly sought after for site-specific management, catering to rapid digital analysis and permitting greater control of the spatial resolution as well as of dataset collection at a given time. Different wavelength sensors may be used to map spectral variations in plant growth, monitor water and nutrient stress, and assess yield and plant vitality during different stages of development. Proximity could be an asset when monitoring forest plants' vitality

  11. Facilitating Social Initiations of Preschoolers with Autism Spectrum Disorders Using Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Hoomes, Grace; Sherberger, Mary Elizabeth; Williams, Sarah

    2011-01-01

    Video self-modeling (VSM) has accumulated a relatively impressive track record in the research literature across behaviors, ages, and types of disabilities. Using only positive imagery, VSM gives individuals the opportunity to view themselves performing a task just beyond their present functioning level via creative editing of videos using VCRs or…

  12. Classification of wetlands vegetation using small scale color infrared imagery

    NASA Technical Reports Server (NTRS)

    Williamson, F. S. L.

    1975-01-01

    A classification system for Chesapeake Bay wetlands was derived from the correlation of film density classes and actual vegetation classes. The data processing programs used were developed by the Laboratory for the Applications of Remote Sensing. These programs were tested for their value in classifying natural vegetation, using digitized data from small scale aerial photography. Existing imagery and the vegetation map of Farm Creek Marsh were used to determine the optimal number of classes, and to aid in determining if the computer maps were a believable product.

  13. The Potential Uses of Commercial Satellite Imagery in the Middle East

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vannoni, M.G.

    1999-06-08

    It became clear during the workshop that the applicability of commercial satellite imagery to the verification of future regional arms control agreements is limited at this time. Non-traditional security topics such as environmental protection, natural resource management, and the development of infrastructure offer the more promising applications for commercial satellite imagery in the short term. Many problems and opportunities in these topics are regional, or at least multilateral, in nature. A further advantage is that, unlike arms control and nonproliferation applications, cooperative use of imagery in these topics can be done independently of the formal Middle East Peace Process. The value of commercial satellite imagery to regional arms control and nonproliferation, however, will increase during the next three years as new, more capable satellite systems are launched. Aerial imagery, such as that used in the Open Skies Treaty, can also make significant contributions to both traditional and non-traditional security applications but has the disadvantage of requiring access to national airspace and potentially higher cost. There was general consensus that commercial satellite imagery is under-utilized in the Middle East and that resources for remote sensing, both human and institutional, are limited. This relative scarcity, however, provides a natural motivation for collaboration in non-traditional security topics. Collaborations between scientists, businesses, universities, and non-governmental organizations can work at the grass-roots level and yield contributions to confidence building as well as scientific and economic results. Joint analysis projects would benefit the region as well as establish precedents for cooperation.

  14. Evaluation of Skybox Video and Still Image products

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

    The SkySat-1 satellite, launched by Skybox Imaging on November 21, 2013, opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high-definition panchromatic video for up to 90 seconds. The small satellite, with a mass of 100 kg, carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second; additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered and their accuracy is evaluated. Image quality of the panchromatic, multispectral, and pansharpened products is evaluated. The video product used in this evaluation consists of a 60-second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques, such as pairwise matching and multi-image matching, are used and compared. As no ground-truth height reference model is available to the authors, comparisons on flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  15. Assessing the impacts of canopy openness and flight parameters on detecting a sub-canopy tropical invasive plant using a small unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Perroy, Ryan L.; Sullivan, Timo; Stephenson, Nathan

    2017-03-01

    Small unmanned aerial systems (sUAS) have great potential to facilitate the early detection and management of invasive plants. Here we show how very high-resolution optical imagery, collected from a small consumer-grade multirotor sUAS platform at altitudes of 30-120 m above ground level (agl), can be used to detect individual miconia (Miconia calvescens) plants in a highly invaded tropical rainforest environment on the island of Hawai'i. The central aim of this research was to determine how overstory vegetation cover, imagery resolution, and camera look-angle impact the aerial detection of known individual miconia plants. For our finest resolution imagery (1.37 cm ground sampling distance collected at 30 m agl), we obtained a 100% detection rate for sub-canopy plants with above-crown openness values >40% and a 69% detection rate for those with >20% openness. We were unable to detect any plants with <10% above-crown openness. Detection rates progressively declined with coarser spatial resolution imagery, ending in a 0% detection rate for the 120 m agl flights (ground sampling distance of 5.31 cm). The addition of forward-looking oblique imagery improved detection rates for plants below overstory vegetation, though this effect decreased with increasing flight altitude. While dense overstory canopy cover, limited flight times, and visual line of sight regulations present formidable obstacles for detecting miconia and other invasive plant species, we show that sUAS platforms carrying optical sensors can be an effective component of an integrated management plan within challenging subcanopy forest environments.

  16. Chosen Aspects of the Production of the Basic Map Using Uav Imagery

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.

    2016-06-01

    For several years there has been increasing interest in the use of unmanned aerial vehicles for acquiring image data from low altitudes. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale, accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products used for registration, economic, and strategic planning. On the basis of these maps, other cartographic products are derived, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery for updating the basic map. In the research, a compact, non-metric camera mounted on a fixed-wing platform powered by an electric motor was used. The tested area covered flat agricultural and woodland terrain. The processing and analysis of the orthorectification were carried out with the INPHO UASMaster programme. Due to the effects of UAV instability on low-altitude imagery, the use of non-metric digital cameras, and low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer compared to conventional digital aerial photos (large values of phi and kappa angles). Therefore, low-altitude images typically require large along- and across-track overlap, usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. As a result, it was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approximately 40%.

  17. Accuracy assessment of vegetation community maps generated by aerial photography interpretation: perspective from the tropical savanna, Australia

    NASA Astrophysics Data System (ADS)

    Lewis, Donna L.; Phinn, Stuart

    2011-01-01

    Aerial photography interpretation is the most common mapping technique in the world. However, unlike an algorithm-based classification of satellite imagery, the accuracy of maps generated by aerial photography interpretation is rarely assessed. Vegetation communities covering an area of 530 km2 on Bullo River Station, Northern Territory, Australia, were mapped using an interpretation of 1:50,000 color aerial photography. Manual stereoscopic line-work was delineated at 1:10,000 and thematic maps generated at 1:25,000 and 1:100,000. Multivariate and intuitive analysis techniques were employed to identify 22 vegetation communities within the study area. The accuracy assessment was based on 50% of a field dataset collected over a 4 year period (2006 to 2009); the remaining 50% of sites were used for map attribution. The overall accuracy and Kappa coefficient for both thematic maps were 66.67% and 0.63, respectively, calculated from standard error matrices. Our findings highlight the need for appropriate scales of mapping and accuracy assessment of vegetation community maps generated by aerial photography interpretation.
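    The two reported statistics follow directly from a standard error (confusion) matrix. A short sketch, using an illustrative 2×2 matrix rather than the study's 22-class matrix:

```python
import numpy as np

def overall_accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's kappa from an error matrix."""
    m = np.asarray(matrix, dtype=float)
    total = m.sum()
    observed = np.trace(m) / total                        # overall accuracy
    # Chance agreement from the row and column marginals.
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

oa, kappa = overall_accuracy_and_kappa([[40, 10], [10, 40]])
print(f"OA={oa:.2%}, kappa={kappa:.2f}")  # → OA=80.00%, kappa=0.60
```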

  18. Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.

    PubMed

    Everitt, J H; Yang, C

    2007-11-01

    A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
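    Producer's and user's accuracies of the kind reported above come from the columns and rows of a confusion matrix, respectively. A brief sketch with illustrative counts, not the study's data:

```python
import numpy as np

def producers_users(matrix):
    """Per-class producer's and user's accuracy.

    matrix: confusion matrix with mapped classes in rows and
    reference classes in columns.
    """
    m = np.asarray(matrix, dtype=float)
    producers = np.diag(m) / m.sum(axis=0)  # omission side: column totals
    users = np.diag(m) / m.sum(axis=1)      # commission side: row totals
    return producers, users

# Rows/columns = (snakeweed, other); counts are hypothetical.
prod, user = producers_users([[59, 8], [1, 32]])
print(prod, user)
```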

  19. Text Detection, Tracking and Recognition in Video: A Comprehensive Survey.

    PubMed

    Yin, Xu-Cheng; Zuo, Ze-Yu; Tian, Shu; Liu, Cheng-Lin

    2016-04-14

    Intelligent analysis of video data is currently in wide demand because video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, while recent surveys of text detection and recognition in imagery [1], [2] focus mainly on text extraction from scene images. Here, this paper presents a comprehensive survey of text detection, tracking and recognition in video with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems and evaluation protocols of video text extraction are summarized, compared, and analyzed. Existing text tracking techniques, tracking based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are also thoroughly discussed.

  20. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    PubMed

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied in aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantage of the deep and shallow convolutional layer, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.

  2. Open Skies aerial photography of selected areas in Central America affected by Hurricane Mitch

    USGS Publications Warehouse

    Molnia, Bruce; Hallam, Cheryl A.

    1999-01-01

    Between October 27 and November 1, 1998, Central America was devastated by Hurricane Mitch. Following the humanitarian relief effort, one of the first informational needs was complete aerial photographic coverage of the storm-ravaged areas so that the governments of the affected countries, the U.S. agencies planning to provide assistance, and the international relief community could come to the aid of the residents of the devastated area. Between December 4 and 19, 1998, an Open Skies aircraft conducted five successful missions and obtained more than 5,000 high-resolution aerial photographs and more than 15,000 video images. The aerial data are being used by the Reconstruction Task Force and many others who are working to begin rebuilding and to help reduce the risk of future destruction.

  3. Bird's-Eye View of Sampling Sites: Using Unmanned Aerial Vehicles to Make Chemistry Fieldwork Videos

    ERIC Educational Resources Information Center

    Fung, Fun Man; Watts, Simon Francis

    2017-01-01

    Drones, unmanned aerial vehicles (UAVs), usually helicopters or airplanes, are commonly used for warfare, aerial surveillance, and recreation. In recent years, drones have become more accessible to the public as a platform for photography. In this report, we explore the use of drones as a new technological filming tool to enhance student learning…

  4. Mapping Urban Ecosystem Services Using High Resolution Aerial Photography

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Neale, A.; Wilhelm, D.

    2010-12-01

    Ecosystem services (ES) are the many life-sustaining benefits we receive from nature: e.g., clean air and water, food and fiber, cultural-aesthetic-recreational benefits, pollination, and flood control. The ES concept is emerging as a means of integrating complex environmental and economic information to support informed environmental decision making. The US EPA is developing a web-based National Atlas of Ecosystem Services, with a component for urban ecosystems. Currently, the only wall-to-wall, national-scale land cover data suitable for this analysis is the National Land Cover Data (NLCD) at 30 m spatial resolution with 5 and 10 year updates. However, aerial photography is acquired at higher spatial resolution (0.5-3 m) and more frequently (typically 1-5 years) for most urban areas. Land cover was mapped in Raleigh, NC using freely available USDA National Agricultural Imagery Program (NAIP) imagery with 1 m ground sample distance to test the suitability of aerial photography for urban ES analysis. Automated feature extraction techniques were used to extract five land cover classes, and an accuracy assessment was performed using standard techniques. Results will be presented that demonstrate applications to mapping ES in urban environments: greenways, corridors, fragmentation, habitat, impervious surfaces, dark and light pavement (urban heat island). At this scale, land cover and related ecosystem services can be examined at the 2-10 m level, where small features such as individual trees and sidewalks are visible and mappable. [Figure: automated feature extraction results mapped over a NAIP color aerial photograph of downtown Raleigh, NC; red = impervious surface, dark green = trees, light green = grass, tan = soil.]

  5. The use of remote sensing imagery for environmental land use and flood hazard mapping

    NASA Technical Reports Server (NTRS)

    Mouat, D. A.; Miller, D. A.; Foster, K. E.

    1976-01-01

    Flood hazard maps have been constructed for Graham, Yuma, and Yavapai Counties in Arizona using remote sensing techniques. Watershed maps of priority areas were selected on the basis of their interest to the county planning staff and represented areas of imminent or ongoing development and those known to be subject to inundation by storm runoff. Landsat color infrared imagery at scales of 1:1,000,000, 1:500,000, and 1:250,000 was used together with high-altitude aerial photography at scales of 1:120,000 and 1:60,000 to determine drainage patterns and erosional features, soil type, and the extent and type of ground cover. The satellite imagery was used in the form of 70 mm chips for enhancement in a color additive viewer and in all available enlargement modes. Field checking served as the main backup to the interpretations. Areas with high susceptibility to flooding were determined with a high level of confidence from the remotely sensed imagery.

  6. Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery

    NASA Technical Reports Server (NTRS)

    Estes, John E.; Gebelein, Jennifer

    1999-01-01

    This report is produced in accordance with the requirements outlined in NASA Research Grant NAG9-1032, titled "Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery". This grant funds the Remote Sensing Research Unit of the University of California, Santa Barbara. This document summarizes the research progress and accomplishments to date and describes current on-going research activities. Even though this grant has technically expired in a contractual sense, work continues on this project; therefore, this summary includes all work done through 5 May 1999. The principal goal of this effort is to test the accuracy of a sub-regional portion of an AVHRR-based land cover product. Land cover mapped to three different classification systems in the southwestern United States has been subjected to two specific accuracy assessments: one utilizing astronaut-acquired photography, and a second employing Landsat Thematic Mapper imagery, augmented in some cases by high-altitude aerial photography. Validation of these three land cover products has proceeded using a stratified sampling methodology. We believe this research will provide an important initial test of the potential use of imagery acquired from the Shuttle and ultimately the International Space Station (ISS) for the operational validation of the Moderate Resolution Imaging Spectrometer (MODIS) land cover products.

  7. Individual tree detection from Unmanned Aerial Vehicle (UAV) derived canopy height model in an open canopy mixed conifer forest

    Treesearch

    Midhun Mohan; Carlos Alberto Silva; Carine Klauberg; Prahlad Jat; Glenn Catts; Adrian Cardil; Andrew Thomas Hudak; Mahendra Dia

    2017-01-01

    Advances in Unmanned Aerial Vehicle (UAV) technology and data processing capabilities have made it feasible to obtain high-resolution imagery and three dimensional (3D) data which can be used for forest monitoring and assessing tree attributes. This study evaluates the applicability of low consumer grade cameras attached to UAVs and structure-from-motion (SfM)...

  8. February 1994 ice storm: forest resource damage assessment in northern Mississippi

    Treesearch

    Dennis M. Jacobs

    2000-01-01

    During February 8-11, 1994, a severe winter storm moved from Texas and Oklahoma to the mid-Atlantic, depositing a major ice accumulation of 3 to 6 inches in northern Mississippi. An assessment of forest resource damage was initiated immediately after the storm by performing an airborne video mission to acquire aerial imagery linked to global positioning coordinates....

  9. Proceedings of the 2004 High Spatial Resolution Commercial Imagery Workshop

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: NASA Applied Sciences Program; USGS Land Remote Sensing: Overview; QuickBird System Status and Product Overview; ORBIMAGE Overview; IKONOS 2004 Calibration and Validation Status; OrbView-3 Spatial Characterization; On-Orbit Modulation Transfer Function (MTF) Measurement of QuickBird; Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season; Image Quality Evaluation of QuickBird Super Resolution and Revisit of IKONOS: Civil and Commercial Application Project (CCAP); On-Orbit System MTF Measurement; QuickBird Post Launch Geopositional Characterization Update; OrbView-3 Geometric Calibration and Geopositional Accuracy; Geopositional Statistical Methods; QuickBird and OrbView-3 Geopositional Accuracy Assessment; Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images; Laboratory Measurement of Bidirectional Reflectance of Radiometric Tarps; Stennis Space Center Verification and Validation Capabilities; Joint Agency Commercial Imagery Evaluation (JACIE) Team; Adjacency Effects in High Resolution Imagery; Effect of Pulse Width vs. GSD on MTF Estimation; Camera and Sensor Calibration at the USGS; QuickBird Geometric Verification; Comparison of MODTRAN to Heritage-based Results in Vicarious Calibration at University of Arizona; Using Remotely Sensed Imagery to Determine Impervious Surface in Sioux Falls, South Dakota; Estimating Sub-Pixel Proportions of Sagebrush with a Regression Tree; How Do YOU Use the National Land Cover Dataset?; The National Map Hazards Data Distribution System; Recording a Troubled World; What Does This-Have to Do with This?; When Can a Picture Save a Thousand Homes?; InSAR Studies of Alaska Volcanoes; Earth Observing-1 (EO-1) Data Products; Improving Access to the USGS Aerial Film Collections: High Resolution Scanners; Improving Access to the USGS Aerial Film Collections: Phoenix Digitizing System Product Distribution; System and Product Characterization: Issues Approach

  10. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar rule was obtained for pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
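
The area-based accuracy assessment described above can be sketched by weighting each segmented object's correctness by its area rather than counting objects equally. This is a minimal illustration with hypothetical object areas and labels, not the study's data or code:

```python
# Area-weighted overall accuracy for object-based classification.
# A minimal sketch; object areas and class labels are hypothetical.

def area_based_oa(objects):
    """objects: list of (area, true_label, predicted_label) tuples."""
    total = sum(a for a, _, _ in objects)
    correct = sum(a for a, t, p in objects if t == p)
    return correct / total

segments = [
    (120.0, "water", "water"),
    (80.0, "built", "built"),
    (50.0, "veg", "built"),   # misclassified object
    (150.0, "veg", "veg"),
]
print(area_based_oa(segments))  # 0.875
```

Weighting by area makes a single large misclassified segment count for more than several small ones, which is the point of an area-based (rather than object-count) assessment.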

  11. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-10-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the catchment area as model input. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases as in many parts of the globe, accurate land-use information is generally lacking, because detailed image data are often unavailable. Modern unmanned aerial vehicles (UAVs) allow one to acquire high-resolution images on a local level at comparably lower cost, performing on-demand repetitive measurements and obtaining a degree of detail tailored for the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is proposed and evaluated in a state-of-the-art urban drainage modelling exercise. In a real-life case study (Lucerne, Switzerland), we compare imperviousness maps generated using a fixed-wing consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their overall accuracy, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyse the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak

  12. Low-cost lightweight airborne laser-based sensors for pipeline leak detection and reporting

    NASA Astrophysics Data System (ADS)

    Frish, Michael B.; Wainner, Richard T.; Laderer, Matthew C.; Allen, Mark G.; Rutherford, James; Wehnert, Paul; Dey, Sean; Gilchrist, John; Corbi, Ron; Picciaia, Daniele; Andreussi, Paolo; Furry, David

    2013-05-01

    Laser sensing enables aerial detection of natural gas pipeline leaks without need to fly through a hazardous gas plume. This paper describes adaptations of commercial laser-based methane sensing technology that provide relatively low-cost lightweight and battery-powered aerial leak sensors. The underlying technology is near-infrared Standoff Tunable Diode Laser Absorption Spectroscopy (sTDLAS). In one configuration, currently in commercial operation for pipeline surveillance, sTDLAS is combined with automated data reduction, alerting, navigation, and video imagery, integrated into a single-engine single-pilot light fixed-wing aircraft or helicopter platform. In a novel configuration for mapping landfill methane emissions, a miniaturized ultra-lightweight sTDLAS sensor flies aboard a small quad-rotor unmanned aerial vehicle (UAV).
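
At its core, a TDLAS leak measurement is a Beer-Lambert retrieval: the dip in returned laser power at the absorption line yields a path-integrated methane column. The sketch below illustrates that relation only; the absorbance-per-column constant `ALPHA` is an illustrative stand-in, not an instrument value from the paper:

```python
import math

# Beer-Lambert retrieval sketch for standoff TDLAS (sTDLAS).
# ALPHA (absorbance per ppm*m at the probed line) is a hypothetical
# instrument constant chosen for illustration.
ALPHA = 4.0e-4  # absorbance per ppm*m (illustrative)

def column_ppm_m(i_received, i_reference):
    """Path-integrated methane column from received vs reference intensity."""
    absorbance = -math.log(i_received / i_reference)
    return absorbance / ALPHA

# A 10% dip in returned laser power implies roughly 263 ppm*m of methane
# along the beam path under this assumed ALPHA.
print(round(column_ppm_m(0.9, 1.0), 1))  # 263.4
```

Because the measurement is path-integrated, the aircraft never has to enter the plume, which is the safety advantage the abstract highlights.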

  13. 3D reconstruction of a tree stem using video images and pulse distances

    Treesearch

    N. E. Clark

    2002-01-01

    This paper demonstrates how a 3D tree stem model can be reconstructed using video imagery combined with laser pulse distance measurements. Perspective projection is used to place the data collected with the portable video laser-rangefinding device into a real world coordinate system. This hybrid methodology uses a relatively small number of range measurements (compared...
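
The perspective-projection step can be sketched as pinhole back-projection: the image pixel defines a ray, and the laser range fixes the 3D point along it. This is a generic illustration under assumed coordinates, not the paper's implementation:

```python
import math

# Pinhole back-projection sketch: combine an image ray with a laser
# range measurement to get a 3D stem point in the camera frame.
# The focal length and pixel coordinates here are illustrative.

def backproject(u, v, f, rng):
    """(u, v): image coords relative to the principal point; f: focal
    length in the same units; rng: measured distance along the ray."""
    ray = (u, v, f)
    norm = math.sqrt(u * u + v * v + f * f)
    return tuple(rng * c / norm for c in ray)

# Ray through the image centre: the point lies straight ahead at range.
x, y, z = backproject(0.0, 0.0, 1000.0, 12.5)
print((x, y, z))  # (0.0, 0.0, 12.5)
```

A full reconstruction would additionally rotate and translate these camera-frame points into the real-world coordinate system using the device's pose.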

  14. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
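
The core of astronomical source finders, as applied here to thermal frames, is a robust background threshold followed by grouping of connected hot pixels. A minimal sketch with a synthetic frame (median/MAD background, 4-connectivity), not the authors' pipeline:

```python
from collections import deque
from statistics import median

# Sigma-threshold detection sketch in the spirit of astronomical source
# finders: flag pixels hotter than a robust background estimate, then
# group them into sources by 4-connectivity. The frame is synthetic.

def detect_sources(frame, k=5.0):
    flat = [p for row in frame for p in row]
    med = median(flat)
    mad = median([abs(p - med) for p in flat])
    thresh = med + k * 1.4826 * mad  # MAD-based robust sigma
    h, w = len(frame), len(frame[0])
    seen, sources = set(), []
    for y in range(h):
        for x in range(w):
            if frame[y][x] > thresh and (y, x) not in seen:
                comp, q = [], deque([(y, x)])
                seen.add((y, x))
                while q:  # breadth-first flood fill of the hot region
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                                and frame[ny][nx] > thresh:
                            seen.add((ny, nx))
                            q.append((ny, nx))
                sources.append(comp)
    return sources

frame = [[20, 21, 20, 19],
         [20, 90, 91, 20],
         [19, 20, 20, 21],
         [21, 20, 85, 20]]
print(len(detect_sources(frame)))  # 2 warm blobs detected
```

The median/MAD background estimate is what keeps bright sources from inflating the threshold, the same reason astronomical detectors prefer robust statistics over a plain mean and standard deviation.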

  15. Assessing the performance of aerial image point cloud and spectral metrics in predicting boreal forest canopy cover

    NASA Astrophysics Data System (ADS)

    Melin, M.; Korhonen, L.; Kukkonen, M.; Packalen, P.

    2017-07-01

    Canopy cover (CC) is a variable used to describe the status of forests and forested habitats, but also the variable used primarily to define what counts as a forest. The estimation of CC has relied heavily on remote sensing with past studies focusing on satellite imagery as well as Airborne Laser Scanning (ALS) using light detection and ranging (lidar). Of these, ALS has been proven highly accurate, because the fraction of pulses penetrating the canopy represents a direct measurement of canopy gap percentage. However, the methods of photogrammetry can be applied to produce point clouds fairly similar to airborne lidar data from aerial images. Currently there is little information about how well such point clouds measure canopy density and gaps. The aim of this study was to assess the suitability of aerial image point clouds for CC estimation and compare the results with those obtained using spectral data from aerial images and Landsat 5. First, we modeled CC for n = 1149 lidar plots using field-measured CCs and lidar data. Next, these data were split into five subsets in the north-south direction (y-coordinate). Finally, four CC models (AerialSpectral, AerialPointcloud, AerialCombi (spectral + pointcloud) and Landsat) were created and they were used to predict new CC values for the lidar plots, subset by subset, using five-fold cross validation. The Landsat and AerialSpectral models performed with RMSEs of 13.8% and 12.4%, respectively. The AerialPointcloud model reached an RMSE of 10.3%, which was further improved by the inclusion of spectral data; the RMSE of the AerialCombi model was 9.3%. We noticed that the aerial image point clouds managed to describe only the outermost layer of the canopy and missed the details in the lower canopy, which resulted in weak characterization of the total CC variation, especially in the tails of the data.
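
The lidar principle the abstract relies on, that the fraction of pulses intercepted by the canopy is a direct CC measurement, reduces to a simple ratio of first returns above a height threshold. A sketch with illustrative heights (the 1.3 m threshold is a common convention, not taken from this study):

```python
# First-return canopy cover sketch: the share of first returns whose
# height above ground exceeds a canopy threshold. Heights (metres) and
# the 1.3 m threshold are illustrative, not this study's data.

def canopy_cover(first_return_heights, threshold=1.3):
    hits = sum(1 for h in first_return_heights if h > threshold)
    return 100.0 * hits / len(first_return_heights)

heights = [0.0, 0.2, 14.1, 17.8, 0.9, 12.5, 16.0, 0.1, 15.2, 13.7]
print(canopy_cover(heights))  # 60.0 (% of pulses intercepted by canopy)
```

The same ratio applied to an image-matching point cloud is biased toward the outer canopy surface, which is consistent with the weaker lower-canopy characterization the authors report.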

  16. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this QuickTime movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical directions, corrects rotation and zoom effects to produce clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
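
One standard building block for this kind of frame-to-frame stabilization is phase correlation, which recovers the translation between frames from the cross-power spectrum. This is a generic sketch of that technique, not VISAR's actual (unpublished here) algorithm:

```python
import numpy as np

# Translation estimate via phase correlation: the normalized cross-power
# spectrum of two frames has an inverse FFT peaked at their relative shift.
# A generic stabilization building block, not VISAR's actual algorithm.

def phase_correlate(a, b):
    """Return (dy, dx) such that a is b circularly shifted by (dy, dx)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12            # keep phase, discard magnitude
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame to negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -5), axis=(0, 1))
print(phase_correlate(shifted, frame))  # (3, -5)
```

A stabilizer would estimate this shift per frame and warp each frame by the negative of it; handling rotation and zoom, as VISAR does, requires extending the model beyond pure translation (e.g. log-polar correlation or parametric motion fitting).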

  18. Tile prediction schemes for wide area motion imagery maps in GIS

    NASA Astrophysics Data System (ADS)

    Michael, Chris J.; Lin, Bruce Y.

    2017-11-01

    Wide-area surveillance, traffic monitoring, and emergency management are just several of many applications benefiting from the incorporation of Wide-Area Motion Imagery (WAMI) maps into geographic information systems. Though the use of motion imagery as a GIS base map via the Web Map Service (WMS) standard is not a new concept, effectively streaming imagery is particularly challenging due to its large scale and the multidimensionally interactive nature of clients that use WMS. Ineffective streaming from a server to one or more clients can unnecessarily overwhelm network bandwidth and cause frustratingly large amounts of latency in visualization to the user. Seamlessly streaming WAMI through GIS requires good prediction to accurately guess the tiles of the video that will be traversed in the near future. In this study, we present an experimental framework for such prediction schemes by presenting a stochastic interaction model that represents a human user's interaction with a GIS video map. We then propose several algorithms by which the tiles of the stream may be predicted. Results collected both within the experimental framework and using human analyst trajectories show that, though each algorithm thrives under certain constraints, the novel Markovian algorithm yields the best results overall. Furthermore, we make the argument that the proposed experimental framework is sufficient for the study of these prediction schemes.
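
A first-order Markov tile predictor of the kind described can be sketched as transition counts over past tile requests, with prefetching of the most likely successors. The (frame, row, col) trace below is synthetic, and this is an illustration of the idea, not the authors' algorithm:

```python
from collections import Counter, defaultdict

# First-order Markov sketch of WAMI tile prefetching: learn transition
# counts from past tile requests, then prefetch the most likely next
# tiles. The request trace below is synthetic.

class MarkovTilePredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, trace):
        """Count consecutive tile-request pairs in a trace."""
        for prev, nxt in zip(trace, trace[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current, n=1):
        """Return the n most frequently observed successors of a tile."""
        return [t for t, _ in self.transitions[current].most_common(n)]

pred = MarkovTilePredictor()
# (frame, row, col) tile requests from a user panning east while playing
pred.observe([(0, 4, 4), (0, 4, 5), (1, 4, 5), (1, 4, 6),
              (2, 4, 6), (2, 4, 7), (0, 4, 4), (0, 4, 5), (1, 4, 5)])
print(pred.predict((0, 4, 4)))  # [(0, 4, 5)]
```

The server would prefetch the predicted tiles into its send queue ahead of the client's next request, trading a little bandwidth on wrong guesses for lower visualization latency on correct ones.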

  19. Towards the Automatic Detection of Pre-Existing Termite Mounds through UAS and Hyperspectral Imagery.

    PubMed

    Sandino, Juan; Wooler, Adam; Gonzalez, Felipe

    2017-09-24

    The increased technological developments in Unmanned Aerial Vehicles (UAVs) combined with artificial intelligence and Machine Learning (ML) approaches have opened the possibility of remote sensing of extensive areas of arid lands. In this paper, a novel approach towards the detection of termite mounds with the use of a UAV, hyperspectral imagery, ML and digital image processing is presented. A new pipeline process is proposed to detect termite mounds automatically and, consequently, to reduce detection times. For the classification stage, the outcomes of several ML classification algorithms were studied, with support vector machines selected as the best approach for image classification of pre-existing termite mounds. Various test conditions were applied to the proposed algorithm, obtaining an overall accuracy of 68%. Images with satisfactory mound detection proved that the method is "resolution-dependent". These mounds were detected regardless of their rotation and position in the aerial image. However, image distortion reduced the number of detected mounds due to the inclusion of a shape analysis method in the object detection phase, and image resolution is still determinant in obtaining accurate results. Hyperspectral imagery demonstrated better capability to classify a large set of materials than traditional segmentation methods applied to RGB images only.

  20. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1988-01-19

    approach for the analysis of aerial images. In this approach image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain dependent knowledge about prototypical urban

  1. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.

  2. Monitoring Seabirds and Marine Mammals by Georeferenced Aerial Photography

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Weidauer, A.; Coppack, T.

    2016-06-01

    The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction, in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, low flight altitudes, necessary for the visual classification of species, disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines) have become a mandatory requirement, technically solving the problem of distance-related observation bias. A purpose-assembled imagery system including medium-format cameras in conjunction with a dedicated geo-positioning platform delivers a series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at a comparatively low cost. At a flight altitude of 425 m, a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high quality 16 bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferrable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye guided by purpose-programmed software
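
The stated survey geometry can be cross-checked with the pinhole relation GSD = pixel_pitch × H / f. The pixel pitch and sensor dimensions below are values implied by the abstract's altitude, focal length, GSD and footprint, not figures quoted in it:

```python
# Consistency check of the stated survey geometry via the pinhole camera
# relation GSD = pixel_pitch * H / f. Pixel pitch and sensor size are
# *implied* by the abstract's numbers (425 m altitude, 110 mm focal
# length, 2 cm GSD, 155 x 410 m footprint), not quoted in it.

H = 425.0    # flight altitude, m
f = 0.110    # focal length, m
gsd = 0.02   # ground sampling distance, m

pixel_pitch = gsd * f / H                       # m per pixel on the sensor
sensor_w = 155 * f / H                          # m, across-track sensor size
sensor_h = 410 * f / H                          # m, along-track sensor size

print(f"pixel pitch ~ {pixel_pitch * 1e6:.1f} um")               # ~5.2 um
print(f"sensor ~ {sensor_w * 1000:.0f} x {sensor_h * 1000:.0f} mm")  # ~40 x 106 mm
```

The implied ~5.2 µm pitch on a ~40 × 106 mm format is consistent with a medium-format twin-camera arrangement, so the quoted numbers hang together.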

  3. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  4. Habitat Mapping and Classification of the Grand Bay National Estuarine Research Reserve using AISA Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Rose, K.

    2012-12-01

    Habitat mapping and classification provides essential information for land use planning and ecosystem research, monitoring and management. At the Grand Bay National Estuarine Research Reserve (GRDNERR), Mississippi, habitat characterization of the Grand Bay watershed will also be used to develop a decision-support tool for the NERR's managers and state and local partners. Grand Bay NERR habitat units were identified using a combination of remotely sensed imagery, aerial photography and elevation data. Airborne Imaging Spectrometer for Applications (AISA) hyperspectral data, acquired 5 and 6 May 2010, was analyzed and classified using ENVI v4.8 and v5.0 software. The AISA system was configured to return 63 bands of digital imagery data with a spectral range of 400 to 970 nm (VNIR), spectral resolution (bandwidth) at 8.76 nm, and 1 m spatial resolution. Minimum Noise Fraction (MNF) and Inverse Minimum Noise Fraction were applied to the data prior to using Spectral Angle Mapper ([SAM] supervised) and ISODATA (unsupervised) classification techniques. The resulting class image was exported to ArcGIS 10.0 and visually inspected and compared with the original imagery as well as auxiliary datasets to assist in the attribution of habitat characteristics to the spectral classes, including: National Agricultural Imagery Program (NAIP) aerial photography, Jackson County, MS, 2010; USFWS National Wetlands Inventory, 2007; an existing GRDNERR habitat map (2004), SAV (2009) and salt panne (2002-2003) GIS produced by GRDNERR; and USACE lidar topo-bathymetry, 2005. A field survey to validate the map's accuracy will take place during the 2012 summer season. ENVI's Random Sample generator was used to generate GIS points for a ground-truth survey. The broad range of coastal estuarine habitats and geomorphological features- many of which are transitional and vulnerable to environmental stressors- that have been identified within the GRDNERR point to the value of the Reserve for
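
The supervised SAM classification used above assigns each pixel spectrum to the reference spectrum with the smallest spectral angle, a measure insensitive to overall brightness. A minimal sketch with illustrative 4-band spectra, not AISA data:

```python
import math

# Spectral Angle Mapper (SAM) sketch: classify a pixel spectrum by the
# smallest angle to a set of reference (endmember) spectra. The 4-band
# spectra below are illustrative, not AISA data.

def spectral_angle(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp guards against round-off pushing the cosine outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel, references):
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

refs = {"marsh": [0.05, 0.08, 0.30, 0.40],
        "water": [0.06, 0.05, 0.02, 0.01]}
print(classify([0.04, 0.07, 0.28, 0.35], refs))  # marsh
```

Because the angle depends only on spectral shape, not magnitude, SAM tolerates illumination differences across a scene, one reason it pairs well with MNF-denoised hyperspectral data.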

  5. Locating inputs of freshwater to Lynch Cove, Hood Canal, Washington, using aerial infrared photography

    USGS Publications Warehouse

    Sheibley, Rich W.; Josberger, Edward G.; Chickadel, Chris

    2010-01-01

    The input of freshwater and associated nutrients into Lynch Cove and lower Hood Canal (fig. 1) from sources such as groundwater seeps, small streams, and ephemeral creeks may play a major role in the nutrient loading and hydrodynamics of this low dissolved-oxygen (hypoxic) system. These dispersed sources exhibit a high degree of spatial variability. However, few in-situ measurements of groundwater seepage rates and nutrient concentrations are available, and thus they may not adequately represent the large spatial variability of groundwater discharge in the area. As a result, our understanding of these processes and their effect on hypoxic conditions in Hood Canal is limited. To determine the spatial variability and relative intensity of these sources, the U.S. Geological Survey Washington Water Science Center collaborated with the University of Washington Applied Physics Laboratory to obtain thermal infrared (TIR) images of the nearshore and intertidal regions of Lynch Cove at or near low tide. In the summer, cool freshwater discharges from seeps and streams, flows across the exposed, sun-warmed beach, and out onto the warm surface of the marine water. These temperature differences are readily apparent in the aerial thermal infrared imagery that we acquired during the summers of 2008 and 2009. When combined with co-incident video camera images, these temperature differences allow identification of the location, the type, and the relative intensity of the sources.

  6. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources - providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment; employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.

  7. Virtually transparent epidermal imagery (VTEI): on new approaches to in vivo wireless high-definition video and image processing.

    PubMed

    Anderson, Adam L; Lin, Bingxiong; Sun, Yu

    2013-12-01

    This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple, micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing-both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and in zero-latency applications. In such situations the typical optimized metrics in communication schemes, such as power and data rate, are far less important than latency and hardware footprint that absolutely preclude their use if not satisfied. This work proposes the use of a novel Frequency-Modulated Voltage-Division Multiplexing (FM-VDM) scheme where sensor data is kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from respective cameras, that also compensates for irregular surfaces in real-time, into a single cohesive view of the surgical area. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures.

  8. Landscape Response to the 1980 Eruption of Mount St. Helens: Using Historical Aerial Photography to Measure Surface Change

    NASA Astrophysics Data System (ADS)

    Sweeney, K.; Major, J. J.

    2016-12-01

    Advances in structure-from-motion (SfM) photogrammetry and point cloud comparison have fueled a proliferation of studies using modern imagery to monitor geomorphic change. These techniques also have obvious applications for reconstructing historical landscapes from vertical aerial imagery, but known challenges include insufficient photo overlap, systematic "doming" induced by photo-spacing regularity, missing metadata, and lack of ground control. Aerial imagery of landscape change in the North Fork Toutle River (NFTR) following the 1980 eruption of Mount St. Helens is a prime dataset to refine methodologies. In particular, (1) 14-μm film scans are available for 1:9600 images at 4-month intervals from 1980 - 1986, (2) the large magnitude of landscape change swamps systematic error and noise, and (3) stable areas (primary deposit features, roads, etc.) provide targets for both ground control and matching to modern lidar. Using AgiSoft PhotoScan, we create digital surface models from the NFTR imagery and examine how common steps in SfM workflows affect results. Tests of scan quality show high-resolution, professional film scans are superior to office scans of paper prints, reducing spurious points related to scan infidelity and image damage. We confirm earlier findings that cropping and rotating images improves point matching and the final surface model produced by the SfM algorithm. We demonstrate how the iterative closest point algorithm, implemented in CloudCompare and using modern lidar as a reference dataset, can serve as an adequate substitute for absolute ground control. Elevation difference maps derived from our surface models of Mount St. Helens show patterns consistent with field observations, including channel avulsion and migration, though systematic errors remain. We suggest that subtracting an empirical function fit to the long-wavelength topographic signal may be one avenue for correcting systematic error in similar datasets.
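
The correction the authors suggest, fitting an empirical function to the long-wavelength signal and subtracting it, can be sketched as a low-order polynomial surface fit to the elevation-difference map. The synthetic "doming" below is illustrative, not their data:

```python
import numpy as np

# Sketch of the suggested long-wavelength error correction: fit a smooth
# low-order polynomial surface to an elevation-difference map and subtract
# it, leaving the short-wavelength geomorphic signal. Data are synthetic.

def remove_long_wavelength(dz, order=2):
    h, w = dz.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y = x.ravel() / w, y.ravel() / h        # normalize for conditioning
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]     # all terms with i + j <= order
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, dz.ravel(), rcond=None)
    return dz - (A @ coeffs).reshape(h, w)

# Synthetic "doming": a paraboloid of the kind photo-spacing regularity
# induces in SfM surface models.
h, w = 50, 50
y, x = np.mgrid[0:h, 0:w]
dome = 5.0 * ((x / w - 0.5)**2 + (y / h - 0.5)**2)
residual = remove_long_wavelength(dome)
print(float(np.abs(residual).max()) < 1e-6)  # True: dome removed
```

In practice the fit would be restricted to stable areas (roads, primary deposit surfaces) so that real elevation change is not absorbed into the fitted surface.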

  9. Understanding successful and unsuccessful landings of aerial maneuver variations in professional surfing.

    PubMed

    Forsyth, J R; Riddiford-Harland, D L; Whitting, J W; Sheppard, J M; Steele, J R

    2018-05-01

    Although performing aerial maneuvers can increase wave score and winning potential in competitive surfing, the critical features underlying successful aerial performance have not been systematically investigated. This study aimed to analyze highly skilled aerial maneuver performance and to identify the critical features associated with successful or unsuccessful landings. Using video recordings of the World Surf League's Championship Tour, every aerial performed during the quarterfinal, semifinal, and final heats from the 11 events in the 2015 season was viewed. From this, 121 aerials were identified, with the Frontside Air (n = 15) and Frontside Air Reverse (n = 67) selected for qualitative assessment. Using chi-squared analyses, a series of key critical features, including landing over the center of the surfboard (FS Air χ² = 14.00, FS Air Reverse χ² = 26.61; P < .001) and landing with the lead ankle in dorsiflexion (FS Air χ² = 3.90, FS Air Reverse χ² = 13.64; P < .05), were found to be associated with successful landings. These critical features help surfers land in a stable position while maintaining contact with the surfboard. The results of this study provide coaches with evidence to adjust the technique of their athletes to improve their winning potential. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
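
The chi-squared test of association used in the study compares observed counts in a contingency table against the counts expected under independence. The landing-outcome counts below are illustrative, not the paper's data:

```python
# Chi-squared test-of-association sketch (the statistic used in the study):
# 2x2 contingency of landing outcome vs a critical feature. The counts
# below are illustrative, not the paper's data.

def chi_squared(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

#            over centre   not over centre
table = [[30, 5],    # successful landings
         [8, 24]]    # unsuccessful landings
print(round(chi_squared(table), 2))  # ~25.1, well above the 3.84
                                     # critical value for 1 df at P = .05
```

A large statistic means the feature's distribution differs sharply between successful and unsuccessful landings, which is exactly the evidence the authors report for board-centre landings and lead-ankle dorsiflexion.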

  10. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam forming are all possible using RAS techniques, and when combined with high-definition video imagery it can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often accompanied by a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings.

  11. The application of ERTS imagery to monitoring Arctic sea ice. [mapping ice in Bering Sea, Beaufort Sea, Canadian Archipelago, and Greenland Sea

    NASA Technical Reports Server (NTRS)

    Barnes, J. C. (Principal Investigator); Bowley, C. J.

    1974-01-01

    The author has identified the following significant results. Because of the effect of sea ice on the heat balance of the Arctic and because of the expanding economic interest in arctic oil and minerals, extensive monitoring and further study of sea ice is required. The application of ERTS data for mapping ice is evaluated for several arctic areas, including the Bering Sea, the eastern Beaufort Sea, parts of the Canadian Archipelago, and the Greenland Sea. Interpretive techniques are discussed, and the scales and types of ice features that can be detected are described. For the Bering Sea, a sample of ERTS-1 imagery is compared with visual ice reports and aerial photography from the NASA CV-990 aircraft. The results of the investigation demonstrate that ERTS-1 imagery has substantial practical application for monitoring arctic sea ice. Ice features as small as 80-100 m in width can be detected, and the combined use of the visible and near-IR imagery is a powerful tool for identifying ice types. Sequential ERTS-1 observations at high latitudes enable ice deformations and movements to be mapped. Ice conditions in the Bering Sea during early March depicted in ERTS-1 images are in close agreement with aerial ice observations and photographs.

  12. Applicability of Unmanned Aerial Vehicles in Research on Aeolian Processes

    NASA Astrophysics Data System (ADS)

    Algimantas, Česnulevičius; Artūras, Bautrėnas; Linas, Bevainis; Donatas, Ovodas; Kęstutis, Papšys

    2018-02-01

    Surface dynamics and instability are characteristic of aeolian landforms. Surface comparison is regarded as the most appropriate method for evaluating the intensity of aeolian processes and the amount of transported sand. Data for surface comparison can be collected by topographic survey or by unmanned aerial vehicles. Recording and measuring relief microforms by topographic survey, however, is very time-consuming. Aerial photography from unmanned aircraft also encounters difficulties, because dune surfaces lack stable, clearly defined objects and contours with which to link photographs, delimit the captured territory, and ensure the accuracy of surface measurements. Installing stationary anchor points is impractical because of intense seasonal sand accumulation and deflation. In September 2015 and April 2016 a combined methodology was applied to evaluate the intensity of aeolian processes on the Curonian Spit. Temporary marks were installed on the surface, their coordinates were fixed by GPS, and a flight of an unmanned aircraft was then conducted. The fixed mark coordinates ensure the accuracy of measurements from the aerial imagery and allow corrections to be calculated. This method was used to track and measure very small (micro-scale) relief forms (5-10 cm high and 10-20 cm long). Morphometric indicators of micro-terraces, formed where dune sand presses into the underlying gyttja layer, were measured in a non-contact way. An additional advantage of the method is the ability to accurately co-register repeated measurements. Comparison of 3D terrain models showed sand deflation and accumulation areas and quantitative changes in the terrain very clearly.

  13. A comparison of LANDSAT TM to MSS imagery for detecting submerged aquatic vegetation in lower Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1985-01-01

    LANDSAT Thematic Mapper (TM) and Multispectral Scanner (MSS) imagery generated simultaneously over Guinea Marsh, Virginia, are assessed for their ability to detect submerged aquatic vegetation (SAV): bottom-adhering plant canopies. An unsupervised clustering algorithm is applied to both image types and the resulting classifications are compared to SAV distributions derived from color aerial photography. Class confidence and accuracy are first computed for all water areas and then only for shallow areas where water depth is less than 6 feet. In both the TM and MSS imagery, masking water areas deeper than 6 ft resulted in greater classification accuracy at confidence levels greater than 50%. Both systems perform poorly in detecting SAV with crown cover densities less than 70%. On the basis of spectral resolution, radiometric sensitivity, and location of visible bands, TM imagery does not offer a significant advantage over MSS data for detecting SAV in Lower Chesapeake Bay. However, because TM imagery has higher spatial resolution, smaller SAV canopies may be detected than is possible with MSS data.

  14. Object-based land-cover classification for metropolitan Phoenix, Arizona, using aerial photography

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxiao; Myint, Soe W.; Zhang, Yujia; Galletti, Christopher; Zhang, Xiaoxiang; Turner, Billie L.

    2014-12-01

    Detailed land-cover mapping is essential for a range of research issues addressed by the sustainability and land system sciences and planning. This study uses an object-based approach to create a 1 m land-cover classification map of the expansive Phoenix metropolitan area through the use of high spatial resolution aerial photography from National Agricultural Imagery Program. It employs an expert knowledge decision rule set and incorporates the cadastral GIS vector layer as auxiliary data. The classification rule was established on a hierarchical image object network, and the properties of parcels in the vector layer were used to establish land cover types. Image segmentations were initially utilized to separate the aerial photos into parcel sized objects, and were further used for detailed land type identification within the parcels. Characteristics of image objects from contextual and geometrical aspects were used in the decision rule set to reduce the spectral limitation of the four-band aerial photography. Classification results include 12 land-cover classes and subclasses that may be assessed from the sub-parcel to the landscape scales, facilitating examination of scale dynamics. The proposed object-based classification method provides robust results, uses minimal and readily available ancillary data, and reduces computational time.

  15. Aerial Explorers

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg; Ippolito, Corey

    2005-01-01

    This paper presents recent results from a mission architecture study of planetary aerial explorers. In this study, several mission scenarios were developed in simulation and evaluated for success in meeting mission goals. This aerial explorer mission architecture study is unique in comparison with previous Mars airplane research activities. The study examines how aerial vehicles can find and gain access to otherwise inaccessible terrain features of interest. The aerial explorer also engages in a high level of (indirect) surface interaction, despite not typically being able to take off and land or to engage in multiple flights/sorties. To achieve this goal, a new mission paradigm is proposed: aerial explorers should be considered as an additional element in the overall Entry, Descent, Landing System (EDLS) process. Further, aerial vehicles should be considered primarily as carrier/utility platforms whose purpose is to deliver air-deployed sensors and robotic devices, or symbiotes, to those high-value terrain features of interest.

  16. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  17. A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications

    DTIC Science & Technology

    2006-01-01

    segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive...multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial

  18. Estimating Belowground Carbon Stocks in Isolated Wetlands of the Northern Everglades Watershed, Central Florida, Using Ground Penetrating Radar and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    McClellan, Matthew; Comas, Xavier; Benscoter, Brian; Hinkle, Ross; Sumner, David

    2017-11-01

    Peat soils store a large fraction of the global soil carbon (C) pool and comprise 95% of wetland C stocks. While isolated freshwater wetlands in temperate and tropical biomes account for more than 20% of the global peatland C stock, most studies of wetland soil C have occurred in expansive peatlands in northern boreal and subarctic biomes. Furthermore, the contribution of small depressional wetlands in comparison to larger wetland systems in these environments is very uncertain. Given the fact that these wetlands are numerous and variable in terms of their internal geometry, innovative methods are needed for properly estimating belowground C stocks and their overall C contribution to the landscape. In this study, we use a combination of ground penetrating radar (GPR), aerial imagery, and direct measurements (coring) in conjunction with C core analysis to develop a relation between C stock and surface area, and estimate the contribution of subtropical depressional wetlands to the total C stock of pine flatwoods at the Disney Wilderness Preserve (DWP), Florida. Additionally, GPR surveys were able to image collapse structures underneath the peat basin of depressional wetlands, depicting lithological controls on the formation of depressional wetlands at the DWP. Results indicate the importance of depressional wetlands as critical contributors to the landscape C budget at the DWP and the potential of GPR-based approaches for (1) rapidly and noninvasively estimating the contribution of depressional wetlands to regional C stocks and (2) evaluating the formational processes of depressional wetlands.
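    A stock-area relation of the kind developed here is typically fitted as a power law in log-log space. The numbers below are invented solely to illustrate the fitting step, not the DWP results:

    ```python
    import numpy as np

    # hypothetical wetland surface areas (m^2) and belowground C stocks (Mg C)
    area = np.array([500.0, 1200.0, 3000.0, 8000.0, 20000.0])
    stock = np.array([6.0, 16.0, 45.0, 130.0, 360.0])

    # fit stock = a * area**b by linear regression on the logarithms
    b, log_a = np.polyfit(np.log(area), np.log(stock), 1)
    a = np.exp(log_a)

    def predicted_stock(surface_area):
        """Estimate C stock (Mg C) for a wetland of the given area (m^2)."""
        return a * surface_area ** b
    ```

    Summing `predicted_stock` over every wetland polygon delineated from the aerial imagery then yields a landscape-scale C estimate without coring every basin.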

  19. Detection of Single Standing Dead Trees from Aerial Color Infrared Imagery by Segmentation with Shape and Intensity Priors

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-03-01

    Standing dead trees, known as snags, are an essential factor in maintaining biodiversity in forest ecosystems. Combined with their role as carbon sinks, this makes for a compelling reason to study their spatial distribution. This paper presents an integrated method to detect and delineate individual dead tree crowns from color infrared aerial imagery. Our approach consists of two steps which incorporate statistical information about prior distributions of both the image intensities and the shapes of the target objects. In the first step, we perform a Gaussian Mixture Model clustering in the pixel color space with priors on the cluster means, obtaining up to 3 components corresponding to dead trees, living trees, and shadows. We then refine the dead tree regions using a level set segmentation method enriched with a generative model of the dead trees' shape distribution as well as a discriminative model of their pixel intensity distribution. The iterative application of the statistical shape template yields the set of delineated dead crowns. The prior information enforces the consistency of the template's shape variation with the shape manifold defined by manually labeled training examples, which makes it possible to separate crowns located in close proximity and prevents the formation of large crown clusters. Also, the statistical information built into the segmentation gives rise to an implicit detection scheme, because the shape template evolves towards an empty contour if not enough evidence for the object is present in the image. We test our method on 3 sample plots from the Bavarian Forest National Park with reference data obtained by manually marking individual dead tree polygons in the images. Our results are scenario-dependent and range from a correctness/completeness of 0.71/0.81 up to 0.77/1, with an average center-of-gravity displacement of 3-5 pixels between the detected and reference polygons.
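    The first clustering step can be sketched in miniature: a Gaussian mixture fitted by expectation-maximisation, initialised from prior means much as the paper initialises its cluster means. For brevity this toy works on a single synthetic intensity band rather than the full pixel color space, and all values are invented:

    ```python
    import numpy as np

    def gmm_em(x, prior_means, iters=50):
        """1-D Gaussian mixture via EM, initialised from prior component means."""
        mu = np.asarray(prior_means, dtype=float)
        k = len(mu)
        var = np.full(k, x.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibility of each component for each sample
            pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            r = pi * pdf
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and variances
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return mu, r.argmax(axis=1)

    # synthetic infrared intensities: shadows, living crowns, dead crowns
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(30, 5, 300),     # shadow pixels
                        rng.normal(100, 8, 300),    # living trees
                        rng.normal(200, 10, 300)])  # dead trees
    mu, labels = gmm_em(x, prior_means=[20, 110, 190])
    ```

    The prior means keep the three components anchored to their intended classes; the level-set refinement and shape priors of the paper then operate on the dead-tree regions this step produces.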

  20. ESIAC: A data products system for ERTS imagery (time-lapse viewing and measuring)

    NASA Technical Reports Server (NTRS)

    Evans, W. E.; Serebreny, S. M.

    1974-01-01

    An Electronic Satellite Image Analysis Console (ESIAC) has been developed for visual analysis and objective measurement of earth resources imagery. The system is being employed to process imagery for use by USGS investigators in several different disciplines studying dynamic hydrologic conditions. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The unique feature of the system is the capability to time-lapse the ERTS imagery and/or analytic displays of the imagery. Data products have included quantitative measurements of distances and areas, brightness profiles, and movie loops of selected themes. The applications of these data products are identified and include such diverse problem areas as measurement of snowfield extent, sediment plumes from estuary discharge, playa inventory, and phreatophyte and other vegetation changes. A comparative ranking of the electronic system in terms of accuracy, cost effectiveness and data output shows it to be a viable means of data analysis.

  1. Identification of disrupted surfaces due to military activity at the Ft. Irwin National Training Center: An aerial photograph and satellite image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, L.E.; Marsh, S.E.; Lee, C.

    1996-07-01

    Concern for environmental management of our natural resources is most often focused on the anthropogenic impacts placed upon these resources. Desert landscapes, in particular, are fragile environments, and minimal stresses on surficial materials can greatly increase the rate and character of erosional responses. The National Training Center, Ft. Irwin, located in the middle of the Mojave Desert, California, provides an isolated study area of intense ORV activity occurring over a 50-year period. Geomorphic surfaces and surficial disruption from two study sites within the Ft. Irwin area were mapped from 1947 (1:28,400) and 1993 (1:12,000) black and white aerial photographs. Several field checks were conducted to verify this mapping. However, mapping from black and white aerial photography relies heavily on tonal differences, patterns, and morphological criteria. Satellite imagery, sensitive to changes in mineralogy, can help improve the ability to distinguish geomorphic units in desert regions. In order to assess both the extent of disrupted surfaces and the surficial geomorphology discernable from satellite imagery, analysis was done on SPOT panchromatic and Landsat Thematic Mapper (TM) multispectral imagery acquired during the spring of 1987 and 1993. The resulting classified images provide a clear indication of the capabilities of the satellite data to aid in the delineation of disrupted geomorphic surfaces.

  2. Adolescents’ exposure to tobacco and alcohol content in YouTube music videos

    PubMed Central

    Murray, Rachael; Lewis, Sarah; Leonardi‐Bee, Jo; Dockrell, Martin; Britton, John

    2015-01-01

    Abstract Aims To quantify tobacco and alcohol content, including branding, in popular contemporary YouTube music videos; and measure adolescent exposure to such content. Design Ten‐second interval content analysis of alcohol, tobacco or electronic cigarette imagery in all UK Top 40 YouTube music videos during a 12‐week period in 2013/14; on‐line national survey of adolescent viewing of the 32 most popular high‐content videos. Setting Great Britain. Participants A total of 2068 adolescents aged 11–18 years who completed an on‐line survey. Measurements Occurrence of alcohol, tobacco and electronic cigarette use, implied use, paraphernalia or branding in music videos and proportions and estimated numbers of adolescents who had watched sampled videos. Findings Alcohol imagery appeared in 45% [95% confidence interval (CI) = 33–51%] of all videos, tobacco in 22% (95% CI = 13–27%) and electronic cigarettes in 2% (95% CI = 0–4%). Alcohol branding appeared in 7% (95% CI = 2–11%) of videos, tobacco branding in 4% (95% CI = 0–7%) and electronic cigarettes in 1% (95% CI = 0–3%). The most frequently observed alcohol, tobacco and electronic cigarette brands were, respectively, Absolut Tune, Marlboro and E‐Lites. At least one of the 32 most popular music videos containing alcohol or tobacco content had been seen by 81% (95% CI = 79%, 83%) of adolescents surveyed, and of these 87% (95% CI = 85%, 89%) had re‐watched at least one video. The average number of videos seen was 7.1 (95% CI = 6.8, 7.4). Girls were more likely to watch and also re‐watch the videos than boys, P < 0.001. Conclusions Popular YouTube music videos watched by a large number of British adolescents, particularly girls, include significant tobacco and alcohol content, including branding. PMID:25516167

  3. Stream network analysis from orbital and suborbital imagery, Colorado River Basin, Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Orbital SL-2 imagery (earth terrain camera S-190B), received September 5, 1973, was subjected to quantitative network analysis and compared to 7.5 minute topographic mapping (scale: 1/24,000) and U.S.D.A. conventional black and white aerial photography (scale: 1/22,200). Results can only be considered suggestive because detail on the SL-2 imagery was badly obscured by heavy cloud cover. The upper Bee Creek basin was chosen for analysis because it appeared in a relatively cloud-free portion of the orbital imagery. Drainage maps were drawn from the three sources digitized into a computer-compatible format, and analyzed by the WATER system computer program. Even at its small scale (1/172,000) and with bad haze the orbital photo showed much drainage detail. The contour-like character of the Glen Rose Formation's resistant limestone units allowed channel definition. The errors in pattern recognition can be attributed to local areas of dense vegetation and to other areas of very high albedo caused by surficial exposure of caliche. The latter effect caused particular difficulty in the determination of drainage divides.

  4. Image Understanding Research and Its Application to Cartography and Computer-Based Analysis of Aerial Imagery

    DTIC Science & Technology

    1983-09-01

    Report AI-TR-346, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts. June 19... [A. Guzman-Arenas...Testbed Coordinator, 415/859-4395. Artificial Intelligence Center, Computer Science and Technology Division. Prepared for: Defense Advanced Research...to support processing of aerial photographs for such military applications as cartography, intelligence, weapon guidance, and targeting. A key

  5. Reconstructing Buildings with Discontinuities and Roof Overhangs from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.

    2017-05-01

    This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures the input data is transformed into a dense point cloud, segmented and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this should fail due to the presence of discontinuities the regression will be repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube a planar piece of the current surface is approximated and expanded. The resulting segments get mutually intersected yielding both topological and geometrical nodes and edges. These entities will be eliminated if their distance-based affiliation to the defining point sets is violated leaving a consistent building hull including its structural breaks. To add the roof overhangs the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap and translated back into world space to become a component of the building. As soon as the reconstructed objects are finished the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification involving a partially parallel placement algorithm. Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas which get reintegrated into the building

  6. Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.

    2016-12-01

    Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate the spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and is also able to differentiate object features on the surface.
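    A deliberately simplified sketch of the two-level idea follows: form crude "super-pixels" by averaging bands over blocks, then cluster them on combined RGB+DEM features. The tile, its band values, the block averaging (standing in for proper superpixel segmentation), and the k-means step (standing in for agglomerative clustering) are all illustrative assumptions, not the authors' pipeline:

    ```python
    import numpy as np

    def block_superpixels(img, block=4):
        """Average band values over block x block windows (crude super-pixels)."""
        h, w, b = img.shape
        img = img[:h - h % block, :w - w % block]
        return img.reshape(h // block, block, w // block, block, b).mean(axis=(1, 3))

    def kmeans(X, k, iters=20):
        """Plain k-means with deterministic, evenly spaced initial centres."""
        centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
        for _ in range(iters):
            labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return labels

    # synthetic 4-band tile (R, G, B, DEM): bright low bare ground on the left,
    # darker elevated vegetation canopy on the right
    tile = np.zeros((32, 32, 4))
    tile[:, :16] = [0.8, 0.7, 0.6, 0.0]
    tile[:, 16:] = [0.2, 0.4, 0.1, 2.0]
    tile += np.random.default_rng(2).normal(0, 0.02, tile.shape)

    sp = block_superpixels(tile)                     # 8 x 8 grid of super-pixels
    labels = kmeans(sp.reshape(-1, 4), k=2).reshape(8, 8)
    ```

    Averaging before clustering is what suppresses the salt-and-pepper noise a per-pixel classifier would produce, and the DEM band separates classes that are spectrally similar but differ in height.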

  7. MAPPING NON-INDIGENOUS EELGRASS ZOSTERA JAPONICA, ASSOCIATED MACROALGAE AND EMERGENT AQUATIC VEGETATION HABITATS IN A PACIFIC NORTHWEST ESTUARY USING NEAR-INFRARED COLOR AERIAL PHOTOGRAPHY AND A HYBRID IMAGE CLASSIFICATION TECHNIQUE

    EPA Science Inventory

    We conducted aerial photographic surveys of Oregon's Yaquina Bay estuary during consecutive summers from 1997 through 2001. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communitie...

  8. Aerial manoeuvrability in wingless gliding ants (Cephalotes atratus)

    PubMed Central

    Yanoviak, Stephen P.; Munk, Yonatan; Kaspari, Mike; Dudley, Robert

    2010-01-01

    In contrast to the patagial membranes of gliding vertebrates, the aerodynamic surfaces used by falling wingless ants to direct their aerial descent are unknown. We conducted ablation experiments to assess the relative contributions of the hindlegs, midlegs and gaster to gliding success in workers of the Neotropical arboreal ant Cephalotes atratus (Hymenoptera: Formicidae). Removal of hindlegs significantly reduced the success rate of directed aerial descent as well as the glide index for successful flights. Removal of the gaster alone did not significantly alter performance relative to controls. Equilibrium glide angles during successful targeting to vertical columns were statistically equivalent between control ants and ants with either the gaster or the hindlegs removed. High-speed video recordings suggested possible use of bilaterally asymmetric motions of the hindlegs to effect body rotations about the vertical axis during targeting manoeuvres. Overall, the control of gliding flight was remarkably robust to dramatic anatomical perturbations, suggesting effective control mechanisms in the face of adverse initial conditions (e.g. falling upside down), variable targeting decisions and turbulent wind gusts during flight. PMID:20236974

  9. Near real-time shadow detection and removal in aerial motion imagery application

    NASA Astrophysics Data System (ADS)

    Silva, Guilherme F.; Carneiro, Grace B.; Doth, Ricardo; Amaral, Leonardo A.; Azevedo, Dario F. G. de

    2018-06-01

    This work presents a method to automatically detect and remove shadows in urban aerial images and its application in an aerospace remote monitoring system requiring near real-time processing. Our detection method generates shadow masks and is accelerated by GPU programming. To obtain the shadow masks, we converted images from RGB to CIELCh model, calculated a modified Specthem ratio, and applied multilevel thresholding. Morphological operations were used to reduce shadow mask noise. The shadow masks are used in the process of removing shadows from the original images using the illumination ratio of the shadow/non-shadow regions. We obtained shadow detection accuracy of around 93% and shadow removal results comparable to the state-of-the-art while maintaining execution time under real-time constraints.
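    The thresholding step can be illustrated with a single-threshold Otsu sketch on a synthetic luminance tile. The paper applies multiple thresholds to a modified Specthem ratio image in CIELCh space, so treat this as a minimal stand-in showing only the variance-maximising threshold selection; the tile and its intensity values are invented:

    ```python
    import numpy as np

    def otsu_threshold(values, bins=256):
        """Otsu's method: pick the threshold maximising between-class variance."""
        hist, edges = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        mids = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(p)            # class-0 probability up to each bin
        m = np.cumsum(p * mids)      # cumulative mean up to each bin
        mt = m[-1]                   # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            var_between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
        return mids[np.nanargmax(var_between)]

    # synthetic luminance tile: a dark shadow band crossing a brighter scene
    rng = np.random.default_rng(3)
    img = rng.normal(0.7, 0.05, (64, 64))
    img[20:40] = rng.normal(0.25, 0.05, (20, 64))

    t = otsu_threshold(img.ravel())
    shadow_mask = img < t            # candidate shadow pixels
    ```

    In the full method the raw mask would then be cleaned with morphological operations before the illumination-ratio removal step.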

  10. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.

  11. Photogrammetry of the Viking Lander imagery

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.

    1982-01-01

    The problem of photogrammetric mapping which uses Viking Lander photography as its basis is solved in two ways: (1) by converting the azimuth and elevation scanning imagery to the equivalent of a frame picture, using computerized rectification; and (2) by interfacing a high-speed, general-purpose computer to the analytical plotter employed, so that all correction computations can be performed in real time during the model-orientation and map-compilation process. Both the efficiency of the Viking Lander cameras and the validity of the rectification method have been established by a series of pre-mission tests which compared the accuracy of terrestrial maps compiled by this method with maps made from aerial photographs. In addition, 1:10-scale topographic maps of Viking Lander sites 1 and 2 having a contour interval of 1.0 cm have been made to test the rectification method.

  12. Estimating belowground carbon stocks in isolated wetlands of the Northern Everglades Watershed, central Florida, using ground penetrating radar (GPR) and aerial imagery

    USGS Publications Warehouse

    McClellan, Matthew; Comas, Xavier; Hinkle, Ross; Sumner, David M.

    2017-01-01

    Peat soils store a large fraction of the global soil carbon (C) pool and comprise 95% of wetland C stocks. While isolated freshwater wetlands in temperate and tropical biomes account for more than 20% of the global peatland C stock, most studies of wetland soil C have occurred in expansive peatlands in northern boreal and subarctic biomes. Furthermore, the contribution of small depressional wetlands in comparison to larger wetland systems in these environments is very uncertain. Given the fact that these wetlands are numerous and variable in terms of their internal geometry, innovative methods are needed for properly estimating belowground C stocks and their overall C contribution to the landscape. In this study, we use a combination of ground penetrating radar (GPR), aerial imagery, and direct measurements (coring) in conjunction with C core analysis to develop a relation between C stock and surface area, and estimate the contribution of subtropical depressional wetlands to the total C stock of pine flatwoods at the Disney Wilderness Preserve (DWP), Florida. Additionally, GPR surveys were able to image collapse structures underneath the peat basin of depressional wetlands, depicting lithological controls on the formation of depressional wetlands at the DWP. Results indicate the importance of depressional wetlands as critical contributors to the landscape C budget at the DWP and the potential of GPR-based approaches for (1) rapidly and noninvasively estimating the contribution of depressional wetlands to regional C stocks and (2) evaluating the formational processes of depressional wetlands.
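    The relation between C stock and surface area developed above could take many forms; the abstract does not give the authors' functional form, so purely as an illustration, a power-law fit in log-log space might look like:

```python
import math

def fit_power_law(areas, stocks):
    """Fit stock = k * area**b by ordinary least squares in log-log space.

    An assumed functional form for the stock-vs-surface-area relation;
    the study's actual regression is not specified in the abstract.
    """
    xs = [math.log(a) for a in areas]
    ys = [math.log(s) for s in stocks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line, then back-transform the intercept.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - b * mx)
    return k, b
```

    With coefficients in hand, the per-wetland stocks estimated from GPR and coring can be scaled to all wetlands whose surface areas are mapped from aerial imagery.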

  13. On the integration of Airborne full-waveform laser scanning and optical imagery for Site Detection and Mapping: Monteserico study case

    NASA Astrophysics Data System (ADS)

    Coluzzi, R.; Guariglia, A.; Lacovara, B.; Lasaponara, R.; Masini, N.

    2009-04-01

    This paper analyses the capability of airborne LiDAR-derived data in the recognition of archaeological marks. It also evaluates the benefits of integrating them with aerial photos and very high resolution satellite imagery. The selected test site is Monteserico, a medieval village located on a pastureland hill in the North East of Basilicata (Southern Italy). The site, attested by documentary sources beginning from the 12th century, was discovered by aerial survey in 1996 [1] and investigated in 2005 by using QuickBird imagery [2]. The only architectural evidence is a castle, built on the western top of the hill, whereas on the southern side earthenware, pottery and crumbling building materials related to the medieval settlement could be observed. From a geological point of view, the stratigraphic sequence is composed of Subappennine Clays, Monte Marano sands and Irsina conglomerates. Sporadic herbaceous plants grow over the investigated area. For the purpose of this study, a full-waveform laser scanner with a 240,000 Hz pulse frequency was used. The average point density of the dataset is about 30 points/m². The final product is an accurately modelled 0.30 m Digital Surface Model (DSM). To derive the DSM, the point cloud of the ALS was filtered and then classified by applying appropriate algorithms. In this way, surface relief and archaeological features were surveyed in great detail. The DSM was compared with other remote sensing data sources, such as oblique and nadiral aerial photos and QuickBird imagery, acquired at different times. In this way it was possible to evaluate, compare and overlay the archaeological features recorded from each data source (aerial, satellite and lidar). Lidar data showed some interesting results. In particular, they allowed for identifying and recording differences in height on the ground produced by surface and shallow archaeological remains (the so-called shadow marks). 
Most of these features are also visible in the optical

  14. MAPPING EELGRASS SPECIES ZOSTERA JAPONICA AND Z. MARINA, ASSOCIATED MACROALGAE AND EMERGENT AQUATIC VEGETATION HABITATS IN PACIFIC NORTHWEST ESTUARIES USING NEAR-INFRARED COLOR AERIAL PHOTOGRAPHY AND A HYBRID IMAGE CLASSIFICATION TECHNIQUE

    EPA Science Inventory

    Aerial photographic surveys of Oregon's Yaquina Bay estuary were conducted during consecutive summers from 1997 through 2000. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communit...

  15. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
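    The multi-frame step described above, adding information from several frames of videotape, can be illustrated with a minimal averaging sketch. Real VISAR first registers the frames (stabilizing translation, rotation and zoom); this toy assumes alignment has already been done, and for zero-mean noise the average of N aligned frames reduces noise roughly by a factor of sqrt(N).

```python
def stack_frames(frames):
    """Average a list of already-aligned grayscale frames.

    Frames are lists of rows of floats. A hedged sketch of multi-frame
    noise reduction, not the VISAR algorithm itself.
    """
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    # Per-pixel mean across the stack of frames.
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
```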

  16. UAV field demonstration of social media enabled tactical data link

    NASA Astrophysics Data System (ADS)

    Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.

    2015-05-01

    This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smartphone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter, while remote users viewed imagery, video, and metadata via the web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via an on-board Bluetooth network.

  17. Comparison of Aerial and Terrestrial Remote Sensing Techniques for Quantifying Forest Canopy Structural Complexity and Estimating Net Primary Productivity

    NASA Astrophysics Data System (ADS)

    Fahey, R. T.; Tallant, J.; Gough, C. M.; Hardiman, B. S.; Atkins, J.; Scheuermann, C. M.

    2016-12-01

    Canopy structure can be an important driver of forest ecosystem functioning - affecting factors such as radiative transfer and light use efficiency, and consequently net primary production (NPP). Both above- (aerial) and below-canopy (terrestrial) remote sensing techniques are used to assess canopy structure and each has advantages and disadvantages. Aerial techniques can cover large geographical areas and provide detailed information on canopy surface and canopy height, but are generally unable to quantitatively assess interior canopy structure. Terrestrial methods provide high resolution information on interior canopy structure and can be cost-effectively repeated, but are limited to very small footprints. Although these methods are often utilized to derive similar metrics (e.g., rugosity, LAI) and to address equivalent ecological questions and relationships (e.g., link between LAI and productivity), rarely are inter-comparisons made between techniques. Our objective is to compare methods for deriving canopy structural complexity (CSC) metrics and to assess the capacity of commonly available aerial remote sensing products (and combinations) to match terrestrially-sensed data. We also assess the potential to combine CSC metrics with image-based analysis to predict plot-based NPP measurements in forests of different ages and different levels of complexity. We use combinations of data from drone-based imagery (RGB, NIR, Red Edge), aerial LiDAR (commonly available medium-density leaf-off), terrestrial scanning LiDAR, portable canopy LiDAR, and a permanent plot network - all collected at the University of Michigan Biological Station. Our results will highlight the potential for deriving functionally meaningful CSC metrics from aerial imagery, LiDAR, and combinations of data sources. 
We will also present results of modeling focused on predicting plot-level NPP from combinations of image-based vegetation indices (e.g., NDVI, EVI) with LiDAR- or image-derived metrics of

  18. Floating aerial LED signage based on aerial imaging by retro-reflection (AIRR).

    PubMed

    Yamamoto, Hirotsugu; Tomiyama, Yuka; Suyama, Shiro

    2014-11-03

    We propose a floating aerial LED signage technique by utilizing retro-reflection. The proposed display is composed of LEDs, a half mirror, and retro-reflective sheeting. Directivity of the aerial image formation and size of the aerial image have been investigated. Furthermore, a floating aerial LED sign has been successfully formed in free space.

  19. A vegetation mapping strategy for conifer forests by combining airborne LiDAR data and aerial imagery

    Treesearch

    Yanjun Su; Qinghua Guo; Danny L. Fry; Brandon M. Collins; Maggi Kelly; Jacob P. Flanagan; John J. Battles

    2016-01-01

    Abstract. Accurate vegetation mapping is critical for natural resources management, ecological analysis, and hydrological modeling, among other tasks. Remotely sensed multispectral and hyperspectral imageries have proved to be valuable inputs to the vegetation mapping process, but they can provide only limited vegetation structure...

  20. Evolution of a natural debris flow: In situ measurements of flow dynamics, video imagery, and terrestrial laser scanning

    USGS Publications Warehouse

    McCoy, S.W.; Kean, J.W.; Coe, J.A.; Staley, D.M.; Wasklewicz, T.A.; Tucker, G.E.

    2010-01-01

    Many theoretical and laboratory studies have been undertaken to understand debris-flow processes and their associated hazards. However, complete and quantitative data sets from natural debris flows needed for confirmation of these results are limited. We used a novel combination of in situ measurements of debris-flow dynamics, video imagery, and pre- and postflow 2-cm-resolution digital terrain models to study a natural debris-flow event. Our field data constrain the initial and final reach morphology and key flow dynamics. The observed event consisted of multiple surges, each with clear variation of flow properties along the length of the surge. Steep, highly resistant, surge fronts of coarse-grained material without measurable pore-fluid pressure were pushed along by relatively fine-grained and water-rich tails that had a wide range of pore-fluid pressures (some two times greater than hydrostatic). Surges with larger nonequilibrium pore-fluid pressures had longer travel distances. A wide range of travel distances from different surges of similar size indicates that dynamic flow properties are of equal or greater importance than channel properties in determining where a particular surge will stop. Progressive vertical accretion of multiple surges generated the total thickness of mapped debris-flow deposits; nevertheless, deposits had massive, vertically unstratified sedimentological textures. © 2010 Geological Society of America.

  1. Motion-seeded object-based attention for dynamic visual imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

    This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video) that (1) processes each frame using an advanced motion algorithm that pulls out regions that exhibit anomalous motion, and (2) extracts the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out by the system in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.
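    The motion-seeding stage can be illustrated with plain frame differencing. The paper's "advanced motion algorithm" is not specified in the abstract, so this stand-in is only a hedged sketch of how a motion mask might seed the later segmentation step.

```python
def motion_mask(prev, curr, thresh=25):
    """Flag pixels whose intensity change between frames exceeds a threshold.

    A toy stand-in for an anomalous-motion detector: `prev` and `curr` are
    same-sized grayscale frames (lists of rows of ints); the returned boolean
    mask marks candidate moving regions for a segmentation stage to refine.
    """
    return [[abs(c - p) > thresh for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]
```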

  2. Unmanned Aerial Vehicle (UAV) data analysis for fertilization dose assessment

    NASA Astrophysics Data System (ADS)

    Kavvadias, Antonis; Psomiadis, Emmanouil; Chanioti, Maroulio; Tsitouras, Alexandros; Toulios, Leonidas; Dercas, Nicholas

    2017-10-01

    The growth rate monitoring of crops throughout their biological cycle is very important, as it contributes to achieving uniformly optimum production, proper harvest planning, and reliable yield estimation. Fertilizer application often dramatically increases crop yields, but it is necessary to find out the ideal amount to apply in the field. Remote sensing collects spatially dense information that may contribute to, or provide feedback about, fertilization management decisions. The goal is to accurately predict the amount of fertilizer needed to attain an ideal crop yield, since excessive use of fertilizers causes financial loss and negative environmental impacts. The comparison of reflectance values at different wavelengths, utilizing suitable vegetation indices, is commonly used to determine plant vigor and growth. Unmanned Aerial Vehicles (UAVs) have several advantages: they can be deployed quickly and repeatedly, they are flexible regarding flying height and timing of missions, and they can obtain very high-resolution imagery. In an experimental crop field in Eleftherio, Larissa, Greece, different doses of pre-plant and in-season fertilization were applied in 27 plots. A total of 102 aerial photos were taken in two flights using an Unmanned Aerial Vehicle, scheduled around the fertilization. A correlation of the experimental fertilization with the change in vegetation index values and with the increase in the vegetation cover rate over those days was made. The results of the analysis provide useful information regarding the vigor and crop growth rate performance of the various fertilization doses.
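    A vegetation index of the kind compared across fertilization doses is computed per pixel from band reflectances. The abstract does not name which indices were used, so NDVI is shown here only as a representative example.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]; higher values
    generally indicate more vigorous green vegetation. Inputs are band
    reflectances for the same pixel.
    """
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom
```

    Mapped over every pixel of each flight's orthomosaic, such an index yields per-plot averages that can be correlated with the applied fertilization dose.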

  3. Training Visual Imagery: Improvements of Metacognition, but not Imagery Strength

    PubMed Central

    Rademaker, Rosanne L.; Pearson, Joel

    2012-01-01

    Visual imagery has been closely linked to brain mechanisms involved in perception. Can visual imagery, like visual perception, improve by means of training? Previous research has demonstrated that people can reliably evaluate the vividness of single episodes of imagination – might the metacognition of imagery also improve over the course of training? We had participants imagine colored Gabor patterns for an hour a day, over the course of five consecutive days, and again 2 weeks after training. Participants rated the subjective vividness and effort of their mental imagery on each trial. The influence of imagery on subsequent binocular rivalry dominance was taken as our measure of imagery strength. We found no overall effect of training on imagery strength. Training did, however, improve participants' metacognition of imagery. Trial-by-trial ratings of vividness gained predictive power on subsequent rivalry dominance as a function of training. These data suggest that, while imagery strength might be immune to training in the current context, people's metacognitive understanding of mental imagery can improve with practice. PMID:22787452

  4. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has relied on remote control through a ground control system connected over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, this RF-modem approach has limitations in long-distance communication. Using a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, we implemented a UAV communication module and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system comprises an image-capturing device for the drone, deployed over the area that needs image capture, and software for loading and managing the smart camera. It consists of automatic shooting using the smart camera's sensors and a shooting catalog that manages the captured images and their metadata. UAV imagery was processed with Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  5. Interactive projection for aerial dance using depth sensing camera

    NASA Astrophysics Data System (ADS)

    Dubnov, Tammuz; Seldess, Zachary; Dubnov, Shlomo

    2014-02-01

    This paper describes an interactive performance system for floor and Aerial Dance that controls visual and sonic aspects of the presentation via a depth sensing camera (MS Kinect). In order to detect, measure and track free movement in space, 3 degree of freedom (3-DOF) tracking in space (on the ground and in the air) is performed using IR markers. Gesture tracking and recognition is performed using a simplified HMM model that allows robust mapping of the actor's actions to graphics and sound. Additional visual effects are achieved by segmentation of the actor's body based on depth information, allowing projection of separate imagery on the performer and the backdrop. Artistic use of augmented reality performance relative to more traditional concepts of stage design and dramaturgy is discussed.
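    The depth-based separation of performer from backdrop can be sketched as a simple near/far threshold on the depth image. The thresholds and the plain-list representation are assumptions for illustration; a real Kinect pipeline would read a depth buffer and likely clean the mask morphologically.

```python
def segment_by_depth(depth, near, far):
    """Binary performer mask from a depth image (rows of depths in meters).

    Pixels whose depth lies between the near and far planes are treated as
    the performer; everything else is backdrop. Separate imagery can then
    be projected onto each region.
    """
    return [[near <= d <= far for d in row] for row in depth]
```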

  6. Observation of coral reefs on Ishigaki Island, Japan, using Landsat TM images and aerial photographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsunaga, Tsuneo; Kayanne, Hajime

    1997-06-01

    Ishigaki Island is located at the southwestern end of the Japanese Islands and is famous for its fringing coral reefs. More than twenty LANDSAT TM images spanning twelve years, together with aerial photographs taken in 1977 and 1994, were used to survey two shallow reefs on this island, Shiraho and Kabira. Intensive field surveys were also conducted in 1995. All satellite images of Shiraho were geometrically corrected and overlaid to construct a multi-date satellite data set. The effects of solar elevation and tide on satellite imagery were studied with this data set. The comparison of aerial and satellite images indicated that significant changes occurred between 1977 and 1984 in Kabira: rapid formation of dark patches in the western part and their decrease in the eastern part. The field surveys revealed that newly formed dark patches in the west contain young corals. These results suggest that remote sensing is useful not only for mapping but also for monitoring shallow coral reefs.

  7. Applications and Innovations for Use of High Definition and High Resolution Digital Motion Imagery in Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2016-01-01

    The first live High Definition Television (HDTV) from a spacecraft was in November, 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operations applicability of HDTV and high resolution imagery since that first live broadcast. This paper will discuss the current state of real-time and file-based HDTV and higher resolution video for space operations. A potential roadmap will be provided for further development and innovations of high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics to be covered in the paper will include: an update on radiation tolerance and performance of various camera types and sensors and ramifications on the future applicability of these types of cameras for space operations; practical experience with downlinking very large imagery files with breaks in link coverage; ramifications of larger camera resolutions like Ultra-High Definition, 6K, and 8K in space applications; enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, Optical Communications, Bayer Pattern Sensors and other similar innovations; and likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.

  8. Facilitating the exploitation of ERTS imagery using snow enhancement techniques. [geological mapping of New England test area

    NASA Technical Reports Server (NTRS)

    Wobber, F. J.; Martin, K. R. (Principal Investigator); Amato, R. V.; Leshendok, T.

    1974-01-01

    The author has identified the following significant results. The procedure for conducting a regional geological mapping program utilizing snow-enhanced ERTS-1 imagery has been summarized. While it is recognized that mapping procedures in geological programs will vary from area to area and from geologist to geologist, it is believed that the procedure tested in this project is applicable over a wide range of mapping programs. The procedure is designed to maximize the utility and value of ERTS-1 imagery and aerial photography within the initial phase of geological mapping programs. Sample products which represent interim steps in the mapping formula (e.g. the ERTS Fracture-Lineament Map) have been prepared. A full account of these procedures and products will be included within the Snow Enhancement Users Manual.

  9. Semi-automated analysis of high-resolution aerial images to quantify docks in Upper Midwest glacial lakes

    USGS Publications Warehouse

    Beck, Marcus W.; Vondracek, Bruce C.; Hatch, Lorin K.; Vinje, Jason

    2013-01-01

    Lake resources can be negatively affected by environmental stressors originating from multiple sources and different spatial scales. Shoreline development, in particular, can negatively affect lake resources through decline in habitat quality, physical disturbance, and impacts on fisheries. The development of remote sensing techniques that efficiently characterize shoreline development in a regional context could greatly improve management approaches for protecting and restoring lake resources. The goal of this study was to develop an approach using high-resolution aerial photographs to quantify and assess docks as indicators of shoreline development. First, we describe a dock analysis workflow that can be used to quantify the spatial extent of docks using aerial images. Our approach incorporates pixel-based classifiers with object-based techniques to effectively analyze high-resolution digital imagery. Second, we apply the analysis workflow to quantify docks for 4261 lakes managed by the Minnesota Department of Natural Resources. Overall accuracy of the analysis results was 98.4% (87.7% based on ) after manual post-processing. The analysis workflow was also 74% more efficient than the time required for manual digitization of docks. These analyses have immediate relevance for resource planning in Minnesota, whereas the dock analysis workflow could be used to quantify shoreline development in other regions with comparable imagery. These data can also be used to better understand the effects of shoreline development on aquatic resources and to evaluate the effects of shoreline development relative to other stressors.
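    The object-based step layered on the pixel classification can be sketched as connected-component labeling of the pixel-level "dock" mask: classified pixels are grouped into discrete objects whose size and shape can then be screened before counting. This is a generic illustration, not the authors' workflow.

```python
def label_objects(mask):
    """4-connected component labeling of a binary mask (rows of 0/1).

    Returns a label image (0 = background, 1..n = object ids) and the
    number of objects found, via an iterative flood fill.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and mask[i][j] and not labels[i][j]):
                        labels[i][j] = count
                        stack += [(i + 1, j), (i - 1, j),
                                  (i, j + 1), (i, j - 1)]
    return labels, count
```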

  10. SkySat-1: very high-resolution imagery from a small satellite

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk

    2014-10-01

    This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially-available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.

  11. Assessing a potential solution for spatially referencing of historical aerial photography in South Africa

    NASA Astrophysics Data System (ADS)

    Denner, Michele; Raubenheimer, Jacobus H.

    2018-05-01

    Historical aerial photography has become a valuable commodity in any country, as it provides a precise record of historical land management over time. In a developing country such as South Africa, which has undergone enormous political and social change in recent decades, such photography is invaluable, as it provides a clear indication of past injustices and serves as an aid to addressing post-apartheid issues such as land reform and land redistribution. National mapping organisations throughout the world have vast repositories of such historical aerial photography. Effective use of these datasets in today's digital environment requires that they be georeferenced to an accuracy that is suitable for the intended purpose. Using image-to-image georeferencing techniques, this research sought to determine the accuracies achievable for ortho-rectifying large volumes of historical aerial imagery against the national standard for ortho-rectification in South Africa, using two different types of scanning equipment. The research conducted four tests using aerial photography from different time epochs over a period of sixty years, where the ortho-rectification matched each test to an already ortho-rectified mosaic of a developed area of mixed land use. The results of each test were assessed in terms of visual accuracy, spatial accuracy and conformance to the national standard for ortho-rectification in South Africa. The results showed a decrease in the overall accuracy of the image as the epoch range between the historical image and the reference image increased. Recommendations are provided on the applications possible given the different epoch ranges and scanning equipment used.
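    Image-to-image georeferencing ultimately estimates a transform from control points matched between the historical image and the reference mosaic. As a hedged sketch only (production workflows would use a GIS package, many redundant points, and an RMSE check against the national standard), an exact affine fit from three non-collinear point pairs looks like:

```python
def _solve3(M, b):
    # Cramer's rule for a 3x3 linear system M @ x = b.
    def det3(A):
        return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
                - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
                + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    d = det3(M)
    out = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = b[r]
        out.append(det3(Mi) / d)
    return out

def fit_affine(src, dst):
    """Affine transform from three matched control-point pairs.

    Solves X = a*x + b*y + c and Y = d*x + e*y + f, mapping source image
    coordinates (x, y) to reference coordinates (X, Y).
    """
    M = [[x, y, 1.0] for x, y in src]
    a, b, c = _solve3(M, [X for X, _ in dst])
    d, e, f = _solve3(M, [Y for _, Y in dst])
    return a, b, c, d, e, f
```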

  12. ERTS-1 imagery and high flight photographs as aids to fire hazard appraisal at the NASA San Pablo Reservoir test site

    NASA Technical Reports Server (NTRS)

    Colwell, R. N.

    1973-01-01

    The identification of fire hazards at the San Pablo Reservoir Test Site in California using ERTS-1 data is discussed. It is stated that the two primary fire hazards in the area are caused by wild oat plants and eucalyptus trees. The types of imagery used in conducting the study are reported. Aerial photographs of specific areas are included to show the extent of the fire hazards.

  13. "F*ck It! Let's Get to Drinking-Poison our Livers!": a Thematic Analysis of Alcohol Content in Contemporary YouTube MusicVideos.

    PubMed

    Cranwell, Jo; Britton, John; Bains, Manpreet

    2017-02-01

    The purpose of the present study is to describe the portrayal of alcohol content in popular YouTube music videos. We used inductive thematic analysis to explore the lyrics and visual imagery in 49 UK Top 40 songs and music videos previously found to contain alcohol content and watched by many British adolescents aged between 11 and 18 years, and to examine whether branded content contravened alcohol industry advertising codes of practice. The analysis generated three themes. First, alcohol content was associated with sexualised imagery or lyrics and the objectification of women. Second, alcohol was associated with image, lifestyle and sociability. Finally, some videos, including those containing branding, overtly encouraged excessive drinking and drunkenness, with no negative consequences to the drinker. Our results suggest that YouTube music videos promote positive associations with alcohol use. Further, several alcohol companies adopt marketing strategies in the video medium that are entirely inconsistent with their own or others' agreed advertising codes of practice. We conclude that, as a harm reduction measure, policies should change to prevent adolescent exposure to the positive promotion of alcohol and alcohol branding in music videos.

  14. Airborne Hyperspectral Imagery for the Detection of Agricultural Crop Stress

    NASA Technical Reports Server (NTRS)

    Cassady, Philip E.; Perry, Eileen M.; Gardner, Margaret E.; Roberts, Dar A.

    2001-01-01

    Multispectral digital imagery from aircraft or satellite is presently being used to derive basic assessments of crop health for growers and others involved in the agricultural industry. Research indicates that narrow band stress indices derived from hyperspectral imagery should have improved sensitivity to provide more specific information on the type and cause of crop stress. Under funding from the NASA Earth Observation Commercial Applications Program (EOCAP) we are identifying and evaluating scientific and commercial applications of hyperspectral imagery for the remote characterization of agricultural crop stress. During the summer of 1999 a field experiment was conducted with varying nitrogen treatments on a production corn field in eastern Nebraska. The AVIRIS (Airborne Visible-Infrared Imaging Spectrometer) hyperspectral imager was flown at two critical dates during crop development, at two different altitudes, providing images with approximately 18m pixels and 3m pixels. Simultaneous supporting soil and crop characterization included spectral reflectance measurements above the canopy, biomass characterization, soil sampling, and aerial photography. In this paper we describe the experiment and results, and examine the following three issues relative to the utility of hyperspectral imagery for scientific study and commercial crop stress products: (1) Accuracy of reflectance derived stress indices relative to conventional measures of stress. We compare reflectance-derived indices (both field radiometer and AVIRIS) with applied nitrogen and with leaf level measurement of nitrogen availability and chlorophyll concentrations over the experimental plots (4 replications of 5 different nitrogen levels); (2) Ability of the hyperspectral sensors to detect sub-pixel areas under crop stress. We applied the stress indices to both the 3m and 18m AVIRIS imagery for the entire production corn field using several sub-pixel areas within the field to compare the relative

  15. Assessing Long-Term Seagrass Changes by Integrating a High-Spatial Resolution Image, Historical Aerial Photography and Field Data

    NASA Astrophysics Data System (ADS)

    Leon-Perez, M.; Hernandez, W. J.; Armstrong, R.

    2016-02-01

    Reported cases of seagrass loss have increased over the last 40 years, raising awareness of the need to assess seagrass health. In situ monitoring has been the main method for assessing spatial and temporal changes in seagrass ecosystems. Although remote sensing techniques with multispectral imagery have recently been used for these purposes, long-term analysis is limited to the sensor's mission life. The objective of this project is to determine long-term changes in seagrass habitat cover at Caja de Muertos Island Nature Reserve by combining in situ data with a satellite image and historical aerial photography. A recent satellite image from the WorldView-2 sensor was used to generate a 2014 benthic habitat map for the study area. The multispectral image was pre-processed using conversion of digital numbers to radiance, and atmospheric and water column corrections. Object-based image analysis was used to segment the image into polygons representing different benthic habitats and to classify those habitats according to the classification scheme developed for this project. The scheme includes the following benthic habitat categories: seagrass (sparse, dense and very dense), colonized hard bottom (sparse, dense and very dense), sand, and mixed algae on unconsolidated sediments. Field work was used to calibrate the satellite-derived benthic maps and to assess the accuracy of the final products. In addition, a time series of satellite imagery and historic aerial photography from 1950 to 2014 provided data to assess long-term changes in seagrass habitat cover within the Reserve. Preliminary results show an increase in seagrass habitat cover, contrasting with the worldwide declining trend. The results of this study will provide valuable information for the conservation and management of seagrass habitat in the Caja de Muertos Island Nature Reserve.

  16. Privacy information management for video surveillance

    NASA Astrophysics Data System (ADS)

    Luo, Ying; Cheung, Sen-ching S.

    2013-05-01

    The widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have been proposed to automatically redact images of trusted individuals in the surveillance video. To identify these individuals for protection, the most reliable approach is to use biometric signals such as iris patterns, as they are immutable and highly discriminative. In this paper, we propose a privacy data management system to be used in a privacy-aware video surveillance system. The privacy status of a subject is anonymously determined based on her iris pattern. For a trusted subject, the surveillance video is redacted and the original imagery is considered to be the privacy information. Our proposed system allows a subject to access her privacy information via the same biometric signal used for privacy status determination. Two secure protocols, one for privacy information encryption and the other for privacy information retrieval, are proposed. Error control coding is used to cope with the variability in iris patterns, and efficient implementation is achieved using surrogate data records. Experimental results on a public iris biometric database demonstrate the validity of our framework.

  17. Vehicle classification in WAMI imagery using deep network

    NASA Astrophysics Data System (ADS)

    Yi, Meng; Yang, Fan; Blasch, Erik; Sheaff, Carolyn; Liu, Kui; Chen, Genshe; Ling, Haibin

    2016-05-01

    Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a great source for many applications, including security, urban planning and route planning. These applications require fast and accurate image understanding, which is time consuming for humans due to the large data volume and city-scale area coverage. Therefore, automatic processing and understanding of WAMI imagery has been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI imagery analysis, namely vehicle classification: deciding whether a certain image patch contains a vehicle or not. We collect a set of positive and negative sample image patches for training and testing the detector. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative images. The first set is generated from positive images with some location shift. The second set of negative patches is generated from randomly sampled patches; we discard those patches if a vehicle is accidentally located at the center. Both positive and negative samples are randomly divided into 9000 training images and 3000 testing images. We propose to train a deep convolutional network for classifying these patches. The classifier is based on a pre-trained AlexNet model in the Caffe library, with an adapted loss function for vehicle classification. The performance of our classifier is compared to several traditional image classifier methods using Support Vector Machine (SVM) and Histogram of Oriented Gradient (HOG) features. While the SVM+HOG method achieves an accuracy of 91.2%, the accuracy of our deep
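
    The patch-sampling scheme described above (positives centered on annotations, negatives from location shifts and random sampling, discarding patches with a vehicle at the center) can be sketched as follows. This is an illustrative NumPy reconstruction; the shift range, minimum-distance rule, and sampling mix are assumptions, not the authors' exact procedure.

```python
import numpy as np

PATCH = 64  # patch size described in the paper

def extract_patch(image, cx, cy, size=PATCH):
    """Crop a size x size patch centered at (cx, cy); None if out of bounds."""
    h = size // 2
    if cy - h < 0 or cx - h < 0 or cy + h > image.shape[0] or cx + h > image.shape[1]:
        return None
    return image[cy - h:cy + h, cx - h:cx + h]

def sample_negatives(image, vehicle_centers, n, shift=24, min_dist=16, rng=None):
    """Negatives: shifted copies of positives plus random patches,
    discarding any patch whose center falls near an annotated vehicle."""
    rng = rng or np.random.default_rng(0)
    centers = np.asarray(vehicle_centers, dtype=float)
    negatives = []
    while len(negatives) < n:
        if rng.random() < 0.5 and len(centers):      # location-shifted negative
            cx, cy = centers[rng.integers(len(centers))]
            cx += rng.integers(-shift, shift + 1)
            cy += rng.integers(-shift, shift + 1)
        else:                                        # randomly sampled negative
            cy = rng.integers(0, image.shape[0])
            cx = rng.integers(0, image.shape[1])
        # discard if a vehicle accidentally sits at the patch center
        if len(centers) and np.min(np.hypot(centers[:, 0] - cx,
                                            centers[:, 1] - cy)) < min_dist:
            continue
        patch = extract_patch(image, int(cx), int(cy))
        if patch is not None:
            negatives.append(patch)
    return np.stack(negatives)

image = np.zeros((512, 512), dtype=np.uint8)
vehicles = [(100, 100), (300, 250)]  # hypothetical annotated (cx, cy) centers
neg = sample_negatives(image, vehicles, n=8)
print(neg.shape)  # (8, 64, 64)
```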

  18. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  19. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

    Floods are among the most devastating natural hazards, and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for mitigation and response. Remote sensing technologies have become the de-facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques in order to produce flood assessments during and after an event. Recent advancements in techniques for fusing remote sensing data with near real time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed based on machine learning algorithms to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image, proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with the classified CAP data and with satellite-derived flood extent results to produce comprehensive flood assessment maps. The final maps are then compared with FEMA-generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases, relating to the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
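
    The per-pixel uncertainty measure described above (proportional to the number of models classifying the pixel as water) reduces to a vote fraction over the ensemble. A minimal sketch, with the wavelet features and the individual classifiers deliberately omitted:

```python
import numpy as np

def water_vote_fraction(model_masks):
    """Given boolean water/non-water masks produced by several classifiers
    run in parallel on the same image, return the per-pixel fraction of
    models voting 'water' (1.0 = unanimous water, 0.0 = unanimous non-water)."""
    masks = np.asarray(model_masks, dtype=float)  # shape (n_models, H, W)
    return masks.mean(axis=0)

# Three toy 2x2 classifications of the same image
votes = [
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[1, 1], [0, 1]],
]
conf = water_vote_fraction(votes)
print(conf[0][0], conf[1][1])  # 1.0 1.0 (unanimous pixels)
```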

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan Hruska

    Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to the use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts’ change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.

  1. Advanced text and video analytics for proactive decision making

    NASA Astrophysics Data System (ADS)

    Bowman, Elizabeth K.; Turek, Matt; Tunison, Paul; Porter, Reed; Thomas, Steve; Gintautas, Vadas; Shargo, Peter; Lin, Jessica; Li, Qingzhe; Gao, Yifeng; Li, Xiaosheng; Mittu, Ranjeev; Rosé, Carolyn Penstein; Maki, Keith; Bogart, Chris; Choudhari, Samrihdi Shree

    2017-05-01

    Today's warfighters operate in a highly dynamic and uncertain world, and face many competing demands. Asymmetric warfare and the new focus on small, agile forces has altered the framework by which time critical information is digested and acted upon by decision makers. Finding and integrating decision-relevant information is increasingly difficult in data-dense environments. In this new information environment, agile data algorithms, machine learning software, and threat alert mechanisms must be developed to automatically create alerts and drive quick response. Yet these advanced technologies must be balanced with awareness of the underlying context to accurately interpret machine-processed indicators, warnings, and recommendations. One promising approach to this challenge brings together information retrieval strategies from text, video, and imagery. In this paper, we describe a technology demonstration that represents two years of tri-service research seeking to meld text and video for enhanced content awareness. The demonstration used multisource data to find an intelligence solution to a problem using a common dataset. Three technology highlights from this effort include 1) Incorporation of external sources of context into imagery normalcy modeling and anomaly detection capabilities, 2) Automated discovery and monitoring of targeted users from social media text, regardless of language, and 3) The concurrent use of text and imagery to characterize behavior using the concept of kinematic and text motifs to detect novel and anomalous patterns. Our demonstration provided a technology baseline for exploiting heterogeneous data sources to deliver timely and accurate synopses of data that contribute to a dynamic and comprehensive worldview.

  2. Correlation and registration of ERTS multispectral imagery. [by a digital processing technique

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O.; Henrikson, P. J.

    1974-01-01

    Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.
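
    The mean radial displacement error quoted above is simply the mean Euclidean length of the residual displacement vectors at matched control points. A minimal sketch (the point coordinates are illustrative, not from the study):

```python
import numpy as np

def mean_radial_error(registered, reference):
    """Mean radial displacement error, in pixels, between registered
    control-point positions and their reference positions."""
    d = np.asarray(registered, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.mean(np.hypot(d[:, 0], d[:, 1])))

# Two hypothetical control points with 0.5-pixel radial residuals each
reg = [[10.3, 20.1], [55.0, 40.4]]
ref = [[10.0, 20.5], [55.3, 40.0]]
print(round(mean_radial_error(reg, ref), 2))  # 0.5
```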

  3. An improved procedure for detection and enumeration of walrus signatures in airborne thermal imagery

    USGS Publications Warehouse

    Burn, Douglas M.; Udevitz, Mark S.; Speckman, Suzann G.; Benter, R. Bradley

    2009-01-01

    In recent years, application of remote sensing to marine mammal surveys has been a promising area of investigation for wildlife managers and researchers. In April 2006, the United States and Russia conducted an aerial survey of Pacific walrus (Odobenus rosmarus divergens) using thermal infrared sensors to detect groups of animals resting on pack ice in the Bering Sea. The goal of this survey was to estimate the size of the Pacific walrus population. An initial analysis of the U.S. data using previously-established methods resulted in lower detectability of walrus groups in the imagery and higher variability in calibration models than was expected based on pilot studies. This paper describes an improved procedure for detection and enumeration of walrus groups in airborne thermal imagery. Thermal images were first subdivided into smaller 200 x 200 pixel "tiles." We calculated three statistics to represent characteristics of walrus signatures from the temperature histogram for each tile. Tiles that exhibited one or more of these characteristics were examined further to determine if walrus signatures were present. We used cluster analysis on tiles that contained walrus signatures to determine which pixels belonged to each group. We then calculated a thermal index value for each walrus group in the imagery and used generalized linear models to estimate detection functions (the probability of a group having a positive index value) and calibration functions (the size of a group as a function of its index value) based on counts from matched digital aerial photographs. The new method described here improved our ability to detect walrus groups at both 2 m and 4 m spatial resolution. In addition, the resulting calibration models have lower variance than the original method. We anticipate that the use of this new procedure will greatly improve the quality of the population estimate derived from these data. This procedure may also have broader applicability to thermal infrared
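
    The tiling-and-screening step can be sketched as follows. The tile size matches the 200 x 200 pixels described, but the three histogram statistics used here (maximum, range, hot-pixel fraction) are illustrative stand-ins for the paper's signature characteristics, and the flagging threshold is an assumption:

```python
import numpy as np

def tile_stats(thermal, tile=200):
    """Subdivide a thermal image into tile x tile blocks and compute simple
    temperature-histogram statistics per tile (row-major order)."""
    h, w = thermal.shape
    stats = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            block = thermal[r:r + tile, c:c + tile]
            stats.append({
                "row": r, "col": c,
                "max": float(block.max()),
                "range": float(block.max() - block.min()),
                # fraction of pixels more than 3 sigma above the tile mean
                "hot_frac": float((block > block.mean() + 3 * block.std()).mean()),
            })
    return stats

rng = np.random.default_rng(1)
img = rng.normal(270.0, 0.5, size=(400, 400))  # background ~270 K
img[50:60, 50:60] += 15.0                      # warm "signature" in tile (0, 0)
s = tile_stats(img)
flagged = [t for t in s if t["range"] > 5.0]   # screen tiles for further review
print(len(s))  # 4 tiles; only the warm tile is flagged
```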

  4. Reliable motion detection of small targets in video with low signal-to-clutter ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, S.A.; Naylor, R.B.

    1995-07-01

    Studies show that vigilance decreases rapidly after several minutes when human operators are required to search live video for infrequent intrusion detections. Therefore, there is a need for systems which can automatically detect targets in live video and reserve the operator's attention for assessment only. Thus far, automated systems have not simultaneously provided adequate detection sensitivity, false alarm suppression, and ease of setup when used in external, unconstrained environments. This unsatisfactory performance can be exacerbated by poor video imagery with low contrast, high noise, dynamic clutter, image misregistration, and/or the presence of small, slow, or erratically moving targets. This paper describes a highly adaptive video motion detection and tracking algorithm which has been developed as part of Sandia's Advanced Exterior Sensor (AES) program. The AES is a wide-area detection and assessment system for use in unconstrained exterior security applications. The AES detection and tracking algorithm provides good performance under stressing data and environmental conditions. Features of the algorithm include: reliable detection with negligible false alarm rate of variable velocity targets having low signal-to-clutter ratios; reliable tracking of targets that exhibit motion that is non-inertial, i.e., varies in direction and velocity; automatic adaptation to both infrared and visible imagery with variable quality; and suppression of false alarms caused by sensor flaws and/or cutouts.

  5. Investigation of selected imagery from SKYLAB/EREP S190 system for medium and small scale mapping

    NASA Technical Reports Server (NTRS)

    Stewart, R. A.

    1975-01-01

    Satellite photography provided by the Skylab mission was investigated as a tool for planimetric mapping at medium and small scales over land surfaces in Canada. The main interest involved the potential use of Skylab imagery for new and revision line mapping, photomapping possibilities, and the application of this photography as control for conventional high altitude aerial surveys. The results of six independent investigations clearly indicate that certain selected sets of this photography are adequate for planimetric mapping purposes at scales of 1:250,000 and smaller. In limited cases, the NATO planimetric accuracy requirements for Class B 1:50,000 scale mapping were also achieved. Of the S190A photography system, the camera containing the Pan X Aerial Black and White film offers the greatest potential for mapping at small scales. However, the S190B system consistently proved to offer more versatility throughout the entire investigation.

  6. Aerial Photography Summary Record System

    USGS Publications Warehouse

    ,

    1998-01-01

    The Aerial Photography Summary Record System (APSRS) describes aerial photography projects that meet specified criteria over a given geographic area of the United States and its territories. Aerial photographs are an important tool in cartography and a number of other professions. Land use planners, real estate developers, lawyers, environmental specialists, and many other professionals rely on detailed and timely aerial photographs. Until 1975, there was no systematic approach to locate an aerial photograph, or series of photographs, quickly and easily. In that year, the U.S. Geological Survey (USGS) inaugurated the APSRS, which has become a standard reference for users of aerial photographs.

  7. Potential digitization/compression techniques for Shuttle video

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B. H.

    1978-01-01

    The Space Shuttle initially will be using a field-sequential color television system, but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes the temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink data utilizes two-dimensional HCK codes.

  8. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The requirements for source data, and for their capture and transfer to create 3D scenes, have not yet been defined. Accuracy issues for 3D video scenes used for measurement purposes are rarely addressed in publications. The practicability of developing, researching, and implementing a technology for the construction of 3D video scenes is substantiated by their capability to expand the field of data analysis for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes with regard to specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.

  9. Polar bears from space: assessing satellite imagery as a tool to track Arctic wildlife.

    PubMed

    Stapleton, Seth; LaRue, Michelle; Lecomte, Nicolas; Atkinson, Stephen; Garshelis, David; Porter, Claire; Atwood, Todd

    2014-01-01

    Development of efficient techniques for monitoring wildlife is a priority in the Arctic, where the impacts of climate change are acute and remoteness and logistical constraints hinder access. We evaluated high resolution satellite imagery as a tool to track the distribution and abundance of polar bears. We examined satellite images of a small island in Foxe Basin, Canada, occupied by a high density of bears during the summer ice-free season. Bears were distinguished from other light-colored spots by comparing images collected on different dates. A sample of ground-truthed points demonstrated that we accurately classified bears. Independent observers reviewed images and a population estimate was obtained using mark-recapture models. This estimate (N: 94; 95% Confidence Interval: 92-105) was remarkably similar to an abundance estimate derived from a line transect aerial survey conducted a few days earlier (N: 102; 95% CI: 69-152). Our findings suggest that satellite imagery is a promising tool for monitoring polar bears on land, with implications for use with other Arctic wildlife. Large scale applications may require development of automated detection processes to expedite review and analysis. Future research should assess the utility of multi-spectral imagery and examine sites with different environmental characteristics.
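
    The mark-recapture principle behind the image-based estimate can be illustrated with the two-sample Chapman estimator, treating the independent image reviewers as two "capture" occasions. This is a simplification for illustration; the study's actual models and counts are not reproduced here.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimator of abundance:
    n1 animals detected by observer 1, n2 by observer 2, m by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Toy two-observer example (counts are illustrative, not from the study)
print(round(chapman_estimate(80, 75, 64), 1))  # 93.7
```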

  10. Polar Bears from Space: Assessing Satellite Imagery as a Tool to Track Arctic Wildlife

    PubMed Central

    Stapleton, Seth; LaRue, Michelle; Lecomte, Nicolas; Atkinson, Stephen; Garshelis, David; Porter, Claire; Atwood, Todd

    2014-01-01

    Development of efficient techniques for monitoring wildlife is a priority in the Arctic, where the impacts of climate change are acute and remoteness and logistical constraints hinder access. We evaluated high resolution satellite imagery as a tool to track the distribution and abundance of polar bears. We examined satellite images of a small island in Foxe Basin, Canada, occupied by a high density of bears during the summer ice-free season. Bears were distinguished from other light-colored spots by comparing images collected on different dates. A sample of ground-truthed points demonstrated that we accurately classified bears. Independent observers reviewed images and a population estimate was obtained using mark–recapture models. This estimate (N: 94; 95% Confidence Interval: 92–105) was remarkably similar to an abundance estimate derived from a line transect aerial survey conducted a few days earlier (N: 102; 95% CI: 69–152). Our findings suggest that satellite imagery is a promising tool for monitoring polar bears on land, with implications for use with other Arctic wildlife. Large scale applications may require development of automated detection processes to expedite review and analysis. Future research should assess the utility of multi-spectral imagery and examine sites with different environmental characteristics. PMID:25006979

  11. Quantifying sub-pixel urban impervious surface through fusion of optical and inSAR imagery

    USGS Publications Warehouse

    Yang, L.; Jiang, L.; Lin, H.; Liao, M.

    2009-01-01

    In this study, we explored the potential to improve urban impervious surface modeling and mapping with the synergistic use of optical and Interferometric Synthetic Aperture Radar (InSAR) imagery. We used a Classification and Regression Tree (CART)-based approach to test the feasibility and accuracy of quantifying Impervious Surface Percentage (ISP) using four spectral bands of SPOT 5 high-resolution geometric (HRG) imagery and three parameters derived from the European Remote Sensing (ERS)-2 Single Look Complex (SLC) SAR image pair. Validated by an independent ISP reference dataset derived from the 33 cm-resolution digital aerial photographs, results show that the addition of InSAR data reduced the ISP modeling error rate from 15.5% to 12.9% and increased the correlation coefficient from 0.71 to 0.77. Spatially, the improvement is especially noted in areas of vacant land and bare ground, which were incorrectly mapped as urban impervious surfaces when using the optical remote sensing data. In addition, the accuracy of ISP prediction using InSAR images alone is only marginally less than that obtained by using SPOT imagery. The finding indicates the potential of using InSAR data for frequent monitoring of urban settings located in cloud-prone areas.

  12. Polar bears from space: Assessing satellite imagery as a tool to track Arctic wildlife

    USGS Publications Warehouse

    Stapleton, Seth P.; LaRue, Michelle A.; Lecomte, Nicolas; Atkinson, Stephen N.; Garshelis, David L.; Porter, Claire; Atwood, Todd C.

    2014-01-01

    Development of efficient techniques for monitoring wildlife is a priority in the Arctic, where the impacts of climate change are acute and remoteness and logistical constraints hinder access. We evaluated high resolution satellite imagery as a tool to track the distribution and abundance of polar bears. We examined satellite images of a small island in Foxe Basin, Canada, occupied by a high density of bears during the summer ice-free season. Bears were distinguished from other light-colored spots by comparing images collected on different dates. A sample of ground-truthed points demonstrated that we accurately classified bears. Independent observers reviewed images and a population estimate was obtained using mark–recapture models. This estimate (N: 94; 95% Confidence Interval: 92-105) was remarkably similar to an abundance estimate derived from a line transect aerial survey conducted a few days earlier (N: 102; 95% CI: 69-152). Our findings suggest that satellite imagery is a promising tool for monitoring polar bears on land, with implications for use with other Arctic wildlife. Large scale applications may require development of automated detection processes to expedite review and analysis. Future research should assess the utility of multi-spectral imagery and examine sites with different environmental characteristics.

  13. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
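
    The core of the preemptive RANSAC scheme (generate all hypotheses up front, score them on successive blocks of observations, and halve the candidate set after each block) can be sketched on a toy 2D line-fitting problem. This is illustrative only; the paper applies the scheme to relative-pose estimation from matched feature points, not to lines, and the hypothesis count, block size, and inlier threshold below are arbitrary choices.

```python
import numpy as np

def preemptive_ransac(points, n_hyp=64, block=20, thresh=0.1, rng=None):
    """Preemptive RANSAC on a toy line-fitting problem: each hypothesis is
    a line (slope, intercept) through two random points; hypotheses are
    scored block-by-block and the lowest scorers are discarded."""
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, dtype=float)
    hyps = []
    for _ in range(n_hyp):
        (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
        if abs(x2 - x1) < 1e-9:
            continue  # skip degenerate (vertical) pairs
        a = (y2 - y1) / (x2 - x1)
        hyps.append((a, y1 - a * x1))
    scores = np.zeros(len(hyps))
    order = rng.permutation(len(pts))
    for start in range(0, len(pts), block):
        chunk = pts[order[start:start + block]]
        for i, (a, b) in enumerate(hyps):
            # inlier count on this block only (preemption)
            scores[i] += np.sum(np.abs(chunk[:, 1] - (a * chunk[:, 0] + b)) < thresh)
        keep = np.argsort(scores)[-max(1, len(hyps) // 2):]  # halve survivors
        hyps = [hyps[i] for i in keep]
        scores = scores[keep]
        if len(hyps) == 1:
            break
    return hyps[int(np.argmax(scores))]

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.02, 200)
y[:40] += rng.uniform(-5, 5, 40)  # 20% gross outliers ("mismatches")
a, b = preemptive_ransac(np.column_stack([x, y]))
print(round(a, 1), round(b, 1))  # close to the true line y = 2x + 1
```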

  14. The pan-sharpening of satellite and UAV imagery for agricultural applications

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Woroszkiewicz, Malgorzata

    2016-10-01

    Remote sensing techniques are widely used in many different areas of interest, i.e. urban studies, environmental studies, agriculture, etc., due to the fact that they provide rapid and accurate information over large areas with optimal temporal, spatial and spectral resolutions. Agricultural management is one of the most common applications of remote sensing methods nowadays. Monitoring agricultural sites and generating information on the spatial distribution and characteristics of crops are important tasks that provide data for precision agriculture, crop management and registries of agricultural lands. Many different types of remote sensing data can be used for monitoring cultivated areas; the most popular is multispectral satellite imagery. Such data allow for generating land use and land cover maps based on various image processing and remote sensing methods. This paper presents the fusion of satellite and unmanned aerial vehicle (UAV) imagery for agricultural applications, especially for distinguishing crop types. The authors present chosen data fusion methods for satellite images and data obtained from low altitudes. Moreover, they describe pan-sharpening approaches and apply chosen pan-sharpening methods to multiresolution image fusion of satellite and UAV imagery. For this purpose, satellite images from the Landsat-8 OLI sensor and data collected during various UAV flights (with a mounted RGB camera) were used. The authors not only show the potential of fusing satellite and UAV images, but also present the application of pan-sharpening in crop identification and management.
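
    The abstract does not state which pan-sharpening methods were chosen; as one common example, the Brovey transform can be sketched in a few lines (function name and data are illustrative):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pan-sharpening (one of several standard methods).

    ms:  (H, W, 3) multispectral bands, already resampled to the
         panchromatic grid.
    pan: (H, W) high-resolution panchromatic band.
    Each band is rescaled by the ratio of the pan intensity to the mean
    multispectral intensity, injecting the pan band's spatial detail.
    """
    intensity = ms.mean(axis=2)
    ratio = pan / (intensity + eps)
    return ms * ratio[..., None]

# Tiny synthetic example: flat multispectral scene, graded pan band.
ms = np.full((4, 4, 3), 0.3)
pan = np.linspace(0.1, 0.9, 16).reshape(4, 4)
sharp = brovey_pansharpen(ms, pan)
```

    A defining property of the Brovey result is that the per-pixel mean of the sharpened bands reproduces the pan band, which is also why the method distorts colors more than component-substitution schemes such as IHS or Gram-Schmidt.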

  15. Standards for efficient employment of wide-area motion imagery (WAMI) sensors

    NASA Astrophysics Data System (ADS)

    Randall, L. Scott; Maenner, Paul F.

    2013-05-01

    Airborne Wide Area Motion Imagery (WAMI) sensors provide the opportunity for continuous high-resolution surveillance of geographic areas covering tens of square kilometers. This is both a blessing and a curse. Data volumes from "gigapixel-class" WAMI sensors are orders of magnitude greater than for traditional "megapixel-class" video sensors. The amount of data greatly exceeds the capacities of downlinks to ground stations, and even if this were not true, the geographic coverage is too large for effective human monitoring. Although collected motion imagery is recorded on the platform, typically only small "windows" of the full field of view are transmitted to the ground; the full set of collected data can be retrieved from the recording device only after the mission has concluded. Thus, the WAMI environment presents several difficulties: (1) data is too massive for downlink; (2) human operator selection and control of the video windows may not be effective; (3) post-mission storage and dissemination may be limited by inefficient file formats; and (4) unique system implementation characteristics may thwart exploitation by available analysis tools. To address these issues, the National Geospatial-Intelligence Agency's Motion Imagery Standards Board (MISB) is developing relevant standard data exchange formats: (1) moving target indicator (MTI) and tracking metadata to support tipping and cueing of WAMI windows using "watch boxes" and "trip wires"; (2) control channel commands for positioning the windows within the full WAMI field of view; and (3) a full-field-of-view spatiotemporal tiled file format for efficient storage, retrieval, and dissemination. The authors previously provided an overview of this suite of standards. This paper describes the latest progress, with specific concentration on a detailed description of the spatiotemporal tiled file format.

  16. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI-based Video-Servoing concepts, PI-based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  17. The Development and Flight Testing of an Aerially Deployed Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Smith, Andrew

    An investigation into the feasibility of aerially deployed unmanned aerial vehicles was completed. The investigation included the development and flight testing of multiple unmanned aerial systems to investigate the different components of potential aerial deployment missions. The project consisted of two main objectives: the first dealt with the development of an airframe capable of surviving aerial deployment from a rocket and then self-assembling from its stowed configuration into its flight configuration; the second focused on the development of an autopilot capable of performing basic guidance, navigation, and control following aerial deployment. To accomplish these two objectives, multiple airframes were developed to verify their completion experimentally. The first portion of the project, investigating the feasibility of surviving an aerial deployment, was completed using a fixed-wing glider that, following a successful deployment, had 52 seconds of controlled flight. Before the autopilot was developed in the second phase of the project, the glider was significantly upgraded to fix faults discovered in glider flight testing and to enhance the system's capabilities. Unfortunately, to conform to outdoor flight restrictions imposed by the university and the Federal Aviation Administration, it was necessary to switch airframes before flight testing of the new fixed-wing platform could begin. As a result, an autopilot was developed for a quadrotor and verified experimentally indoors to remain within the limits of governing policies.

  18. Cue-Reactive Rationality, Visual Imagery and Volitional Control Predict Cue-Reactive Urge to Gamble in Poker-Machine Gamblers.

    PubMed

    Clark, Gavin I; Rock, Adam J; McKeith, Charles F A; Coventry, William L

    2017-09-01

    Poker-machine gamblers have been demonstrated to report increases in the urge to gamble following exposure to salient gambling cues. However, the processes which contribute to this urge to gamble remain to be understood. The present study aimed to investigate whether changes in the conscious experience of visual imagery, rationality and volitional control (over one's thoughts, images and attention) predicted changes in the urge to gamble following exposure to a gambling cue. Thirty-one regular poker-machine gamblers who reported at least low levels of problem gambling on the Problem Gambling Severity Index (PGSI) were recruited to complete an online cue-reactivity experiment. Participants completed the PGSI; the visual imagery, rationality and volitional control subscales of the Phenomenology of Consciousness Inventory (PCI); and a visual analogue scale (VAS) assessing urge to gamble. Participants completed the PCI subscales and VAS at baseline, following a neutral video cue and following a gambling video cue. Urge to gamble was found to significantly increase from neutral cue to gambling cue (while controlling for baseline urge), and this increase was predicted by PGSI score. After accounting for the effects of problem-gambling severity, cue-reactive visual imagery, rationality and volitional control significantly improved the prediction of cue-reactive urge to gamble. The small sample size and limited participant characteristic data restrict the generalizability of the findings. Nevertheless, this is the first study to demonstrate that changes in the subjective experience of visual imagery, volitional control and rationality predict changes in the urge to gamble from neutral to gambling cue. The results suggest that visual imagery, rationality and volitional control may play an important role in the experience of the urge to gamble in poker-machine gamblers.

  19. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS?s Scientific Manuscript database

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  20. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (e.g., bridges and buildings) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985
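
    The correlation-based tracking underlying such vision-based displacement measurement can be sketched in one dimension. This generic normalized cross-correlation search is not the authors' implementation, and the signals below are synthetic:

```python
import numpy as np

def ncc_shift(template, signal):
    """Find the offset of `template` inside `signal` that maximizes
    normalized cross-correlation, the basic operation behind
    correlation-based displacement tracking between video frames."""
    t = template - template.mean()
    n = len(template)
    best, best_score = 0, -np.inf
    for s in range(len(signal) - n + 1):
        w = signal[s:s + n] - signal[s:s + n].mean()
        denom = np.linalg.norm(t) * np.linalg.norm(w)
        score = (t @ w) / denom if denom > 0 else -np.inf
        if score > best_score:
            best, best_score = s, score
    return best

# Synthetic intensity profile, shifted by 7 samples and lightly corrupted.
rng = np.random.default_rng(0)
profile = rng.normal(0, 1, 200)
template = profile[50:80]            # patch tracked from the first frame
signal = np.roll(profile, 7) + rng.normal(0, 0.05, 200)
shift = ncc_shift(template, signal)  # patch found at index 57 = 50 + 7
```

    In the UAV case the recovered pixel shift mixes structural motion with camera motion, which is why the paper's scale-factor estimation and rolling-shutter compensation are needed on top of the raw tracking.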

  1. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    The Human Exploration Science Office (KX) provides leadership for NASA's Imagery Integration (Integration 2) Team, an affiliation of experts in the use of engineering-class imagery intended to monitor the performance of launch vehicles and crewed spacecraft in flight. Typical engineering imagery assessments include studying and characterizing the liftoff and ascent debris environments; launch vehicle and propulsion element performance; in-flight activities; and entry, landing, and recovery operations. Integration 2 support has been provided not only for U.S. Government spaceflight (e.g., Space Shuttle, Ares I-X) but also for commercial launch providers, such as Space Exploration Technologies Corporation (SpaceX) and Orbital Sciences Corporation, servicing the International Space Station. The NASA Integration 2 Team is composed of imagery integration specialists from JSC, the Marshall Space Flight Center (MSFC), and the Kennedy Space Center (KSC), who have access to a vast pool of experience and capabilities related to program integration, deployment and management of imagery assets, imagery data management, and photogrammetric analysis. The Integration 2 team is currently providing integration services to commercial demonstration flights, Exploration Flight Test-1 (EFT-1), and the Space Launch System (SLS)-based Exploration Missions (EM)-1 and EM-2. EM-2 will be the first attempt to fly a piloted mission with the Orion spacecraft. The Integration 2 Team provides the customer (both commercial and Government) with access to a wide array of imagery options - ground-based, airborne, seaborne, or vehicle-based - that are available through the Government and commercial vendors. The team guides the customer in assembling the appropriate complement of imagery acquisition assets at the customer's facilities, minimizing costs associated with market research and the risk of purchasing inadequate assets. The NASA Integration 2 capability simplifies the process of securing one

  2. Writing Assignments in Disguise: Lessons Learned Using Video Projects in the Classroom

    NASA Astrophysics Data System (ADS)

    Wade, P.; Courtney, A.

    2012-12-01

    This study describes the instructional approach of using student-created video documentaries as projects in an undergraduate non-science majors' Energy Perspectives science course. Four years of teaching this course provided many reflective teaching moments from which we have enhanced our instructional approach to teaching students how to construct a quality Ken Burns-style science video. Fundamental to a good video documentary is the story told via a narrative, which involves significant writing, editing and rewriting. Many students primarily associate a video documentary with visual imagery and do not realize the importance of writing in the production of the video. Required components of the student-created video include: 1) select a topic, 2) conduct research, 3) write an outline, 4) write a narrative, 5) construct a project storyboard, 6) shoot or acquire video and photos (from legal sources), 7) record the narrative, 8) construct the video documentary, 9) edit and 10) finalize the project. Two knowledge survey instruments (administered pre- and post) were used for assessment purposes. One survey focused on the skills necessary to research and produce video documentaries, and the second survey assessed students' content knowledge acquired from each documentary. This talk will focus on the components necessary for video documentaries and the instructional lessons learned over the years. Additionally, results from both surveys and student reflections on the video project will be shared.

  3. 2. AERIAL VIEW OF MINUTEMAN SILOS. Low oblique aerial view ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. AERIAL VIEW OF MINUTEMAN SILOS. Low oblique aerial view (original in color) of the two launch silos, covered. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Missile Silo Type, Test Area 1-100, northeast end of Test Area 1-100 Road, Boron, Kern County, CA

  4. Analysis of ERTS imagery using special electronic viewing/measuring equipment

    NASA Technical Reports Server (NTRS)

    Evans, W. E.; Serebreny, S. M.

    1973-01-01

    An electronic satellite image analysis console (ESIAC) is being employed to process imagery for use by USGS investigators in several different disciplines studying dynamic hydrologic conditions. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. Quantitative measurements of distances, areas, and brightness profiles can be extracted digitally under operator supervision. Initial results are presented for the display and measurement of snowfield extent, glacier development, sediment plumes from estuary discharge, playa inventory, phreatophyte and other vegetative changes.

  5. Aerial Explorers and Robotic Ecosystems

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg

    2004-01-01

    A unique bio-inspired approach to autonomous aerial vehicles, a.k.a. aerial explorer technology, is discussed. The work is focused on defining and studying aerial explorer mission concepts, both as an individual robotic system and as a member of a small robotic "ecosystem." Members of this robotic ecosystem include the aerial explorer, air-deployed sensors and robotic symbiotes, and other assets such as rovers, landers, and orbiters.

  6. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28mm focal length lens, resulting in a 10cm GSD and a swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown and thus there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) Lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
    · Higher and uniform spatial resolution: 40 cm GSD
    · 45% wider swath: 435 meters vs. 300 meters at 500 meter flight altitude
    · Visible RGB co-registered overlay at 10 cm GSD
    · Enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through)
    Examples will be presented of the utility of these advantages and a novel use of a cell phone camera for
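
    The per-frame adjustment described, forcing a zero mean difference against the ATM point cloud, can be sketched as follows (function name and numbers are illustrative; real processing samples the DEM at each lidar footprint):

```python
import numpy as np

def adjust_dem_to_lidar(dem_samples, lidar_z):
    """Shift DEM elevations (sampled at the lidar points) so the mean
    DEM-minus-lidar difference is zero, then report the residual RMS.
    A sketch of the per-frame adjustment; data are illustrative."""
    offset = np.mean(dem_samples - lidar_z)     # mean vertical bias
    adjusted = dem_samples - offset             # zero-mean difference
    rms = np.sqrt(np.mean((adjusted - lidar_z) ** 2))
    return offset, adjusted, rms

# Synthetic frame: DEM biased +0.35 m with 6 cm noise against lidar.
rng = np.random.default_rng(2)
lidar_z = rng.uniform(100, 110, 500)
dem_samples = lidar_z + 0.35 + rng.normal(0, 0.06, 500)
offset, adjusted, rms = adjust_dem_to_lidar(dem_samples, lidar_z)
```

    After the shift, the residual RMS reflects only the relative (shape) error of the photogrammetric surface, which is what the reported +/- 10 cm statistic measures.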

  7. Regional albedo of Arctic first-year drift ice in advanced stages of melt from the combination of in situ measurements and aerial imagery

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Granskog, M. A.; Hudson, S. R.; Pedersen, C. A.; Karlsen, T. I.; Divina, S. A.; Gerland, S.

    2014-07-01

    The paper presents a case study of the regional (≈ 150 km) broadband albedo of first-year Arctic sea ice in advanced stages of melt, estimated from a combination of in situ albedo measurements and aerial imagery. The data were collected during the eight-day ICE12 drift experiment carried out by the Norwegian Polar Institute in the Arctic north of Svalbard at 82.3° N from 26 July to 3 August 2012. The study uses in situ albedo measurements representative of the four main surface types: bare ice, dark melt ponds, bright melt ponds and open water. Images acquired by a helicopter-borne camera system during ice survey flights covered about 28 km2. A subset of > 8000 images from the area of homogeneous melt, with an open water fraction of ≈ 0.11 and melt pond coverage of ≈ 0.25, used in the upscaling yielded a regional albedo estimate of 0.40 (0.38; 0.42). The 95% confidence interval on the estimate was derived using the moving block bootstrap approach, applied to sequences of classified sea ice images with the albedo of the four surface types treated as random variables. Uncertainty in the mean estimates of surface type albedo from in situ measurements contributed some 95% of the variance of the estimated regional albedo, with the remaining variance resulting from the spatial inhomogeneity of sea ice cover. The results of the study are of relevance for the modeling of sea ice processes in climate simulations. This particularly concerns the period of summer melt, when the optical properties of sea ice undergo substantial changes which existing sea ice models have significant difficulty reproducing accurately.
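
    The upscaling step reduces to an area-weighted mean over the four surface types. In the sketch below, the open-water and pond fractions follow the abstract, but the albedo values and the bare-ice/pond split are illustrative placeholders, not the paper's measurements:

```python
# Surface-type albedos of the right order for melt-season sea ice;
# illustrative placeholders, not the paper's in situ values.
albedo = {"bare_ice": 0.60, "bright_pond": 0.30,
          "dark_pond": 0.15, "open_water": 0.07}

# Area fractions from image classification: open water 0.11 and total
# pond cover 0.25 follow the abstract; the remaining split is invented.
fraction = {"bare_ice": 0.64, "bright_pond": 0.15,
            "dark_pond": 0.10, "open_water": 0.11}

# Area-weighted regional albedo over the classified scene.
regional_albedo = sum(fraction[k] * albedo[k] for k in albedo)
```

    The paper's confidence interval comes from resampling: a moving block bootstrap over the classified image sequence (preserving spatial autocorrelation of the fractions) combined with random draws of the four surface-type albedos.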

  8. PixonVision real-time video processor

    NASA Astrophysics Data System (ADS)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  9. Post-Disaster Damage Assessment Through Coherent Change Detection on SAR Imagery

    NASA Astrophysics Data System (ADS)

    Guida, L.; Boccardo, P.; Donevski, I.; Lo Schiavo, L.; Molinari, M. E.; Monti-Guarnieri, A.; Oxoli, D.; Brovelli, M. A.

    2018-04-01

    Damage assessment is a fundamental step to support emergency response and recovery activities in a post-earthquake scenario. In recent years, UAV and satellite optical imagery have been applied to assess major structural damages before technicians could reach the areas affected by the earthquake. However, bad weather conditions may harm the quality of these optical assessments, thus limiting the practical applicability of these techniques. In this paper, the application of Synthetic Aperture Radar (SAR) imagery is investigated and a novel approach to SAR-based damage assessment is presented. Coherent Change Detection (CCD) algorithms on multiple interferometrically pre-processed SAR images of the area affected by the seismic event are exploited to automatically detect potential damage to buildings and other physical structures. As a case study, the 2016 Central Italy earthquake involving the cities of Amatrice and Accumoli was selected. The main contribution of the research outlined above is the integration of a complex process, requiring the coordination of a variety of methods and tools, into a unitary framework, which allows end-to-end application of the approach from SAR data pre-processing to result visualization in a Geographic Information System (GIS). A prototype of this pipeline was implemented, and the outcomes of this methodology were validated through an extended comparison with traditional damage assessment maps created through photo-interpretation of high-resolution aerial imagery. The results indicate that the proposed methodology is able to perform damage detection with a good level of accuracy, as most of the detected points of change are concentrated around highly damaged buildings.
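
    At the core of CCD is a windowed coherence estimate between two co-registered complex SAR acquisitions; where the surface changed, coherence drops. A minimal NumPy sketch on synthetic speckle (not the paper's pipeline, which adds interferometric pre-processing and GIS integration):

```python
import numpy as np

def coherence_map(s1, s2, win=5):
    """Windowed sample coherence |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>)
    of two co-registered complex SAR images.  Values near 1 mean the
    surface is unchanged; a drop flags potential damage."""
    sw = np.lib.stride_tricks.sliding_window_view
    box = lambda a: sw(a, (win, win)).mean(axis=(-2, -1))
    num = np.abs(box(s1 * np.conj(s2)))
    den = np.sqrt(box(np.abs(s1) ** 2) * box(np.abs(s2) ** 2))
    return num / den

# Synthetic scene: identical speckle in both acquisitions except a
# "damaged" patch where the second acquisition fully decorrelates.
rng = np.random.default_rng(3)
s1 = rng.normal(size=(40, 40)) + 1j * rng.normal(size=(40, 40))
s2 = s1.copy()
s2[10:25, 10:25] = rng.normal(size=(15, 15)) + 1j * rng.normal(size=(15, 15))
g = coherence_map(s1, s2)  # high outside the patch, low inside
```

    By the Cauchy-Schwarz inequality the coherence magnitude lies in [0, 1]; thresholding the map (or comparing pre-event and co-event coherence) yields the candidate damage points discussed above.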

  10. "I'll be your cigarette--light me up and get on with it": examining smoking imagery on YouTube.

    PubMed

    Forsyth, Susan R; Malone, Ruth E

    2010-08-01

    Smoking imagery on the online video sharing site YouTube is prolific and easily accessed. However, no studies have examined how this content changes across time. We studied the primary message and genre of YouTube videos about smoking across two time periods. In May and July 2009, we used "cigarettes" and "smoking cigarettes" to retrieve the top 20 videos on YouTube by relevance and view count. Eliminating duplicates, 124 videos were coded for time period, overall message, genre, and brand mentions. Data were analyzed using descriptive statistics. Videos portraying smoking positively far outnumbered smoking-negative videos in both samples, increasing as a percentage of total views across the time period. Fifty-eight percent of videos in the second sample were new. Among smoking-positive videos, music and magic tricks were most numerous, increasing from 66% to nearly 80% in July, with music accounting for most of the increase. Marlboro was the most frequently mentioned brand. Videos portraying smoking positively predominate on YouTube, and this pattern persists across time. Tobacco control advocates could use YouTube more effectively to counterbalance prosmoking messages.

  11. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.

  12. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    DOE PAGES

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; ...

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  13. Low aerial imagery - an assessment of georeferencing errors and the potential for use in environmental inventory

    NASA Astrophysics Data System (ADS)

    Smaczyński, Maciej; Medyńska-Gulij, Beata

    2017-06-01

    Unmanned aerial vehicles are increasingly being used in close-range photogrammetry. Real-time observation of the Earth's surface and the photogrammetric images obtained are used as material for surveying and environmental inventory. The following study was conducted on a small area (approximately 1 ha). In such cases, the classical method of topographic mapping is not accurate enough. The geodetic method of topographic surveying, on the other hand, is an overly precise measurement technique for the purpose of inventorying natural environment components. The authors of the following study have proposed using unmanned aerial vehicle technology and tying the obtained images to a control point network established with the aid of GNSS technology. Georeferencing the acquired images and using them to create a photogrammetric model of the studied area enabled the researchers to perform calculations which yielded a total root mean square error below 9 cm. The performed comparison of the real lengths of the vectors connecting the control points and their lengths calculated on the basis of the photogrammetric model made it possible to fully confirm the calculated RMSE and prove the usefulness of UAV technology in observing terrain components for the purpose of environmental inventory. Such environmental components include, among others, elements of road infrastructure and green areas, but also changes in the location of moving pedestrians and vehicles, as well as other changes in the natural environment that are not registered on classical base maps or topographic maps.
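
    The reported check, comparing surveyed control-vector lengths with the same lengths measured on the photogrammetric model, amounts to an RMSE over point pairs. A sketch with illustrative coordinates (not the study's survey data):

```python
import numpy as np

def length_rmse(ground_pts, model_pts, pairs):
    """RMSE between surveyed control-vector lengths and the same
    vector lengths measured on the photogrammetric model."""
    def lengths(pts):
        return np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])
    d = lengths(ground_pts) - lengths(model_pts)
    return np.sqrt(np.mean(d ** 2))

# Four GNSS control points and slightly distorted model coordinates
# (all values illustrative, in meters).
ground = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 40.0], [0.0, 40.0]])
model = ground + np.array([[0.03, -0.02], [-0.04, 0.05],
                           [0.02, 0.04], [-0.05, -0.03]])
pairs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
rmse = length_rmse(ground, model, pairs)  # a few centimeters here
```

    Comparing lengths rather than absolute coordinates makes the check insensitive to any rigid shift between the model and the survey datum, isolating the model's internal (scale and shape) accuracy.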

  14. The influence of the in situ camera calibration for direct georeferencing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Barrios, R.; Centeno, J.

    2014-11-01

    The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process due to the GNSS/INS technologies, the accuracies of the obtained results depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, due to the impossibility of having all sensors at the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which is not available in the conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs are performed and their results are analyzed and used in photogrammetric experiments. The IOPs

  15. Aerial photo SBVC1962". Photo no. 360. Low oblique aerial view ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Aerial photo "SBVC-1962". Photo no. 360. Low oblique aerial view of the campus, looking southeast. Stamped on the rear: "Ron Wilhite, Sun-Telegram photo, file, 10/22/62" - San Bernardino Valley College, 701 South Mount Vernon Avenue, San Bernardino, San Bernardino County, CA

  16. Dhaksha, the Unmanned Aircraft System in its New Avatar-Automated Aerial Inspection of INDIA'S Tallest Tower

    NASA Astrophysics Data System (ADS)

    Kumar, K. S.; Rasheed, A. Mohamed; Krishna Kumar, R.; Giridharan, M.; Ganesh

    2013-08-01

    DHAKSHA, the unmanned aircraft system (UAS) developed after several years of research by the Division of Avionics, Department of Aerospace Engineering, MIT Campus of Anna University, recently proved its capabilities during the May 2012 technology demonstration UAVForge, organised by the Defense Advanced Research Projects Agency (DARPA), Department of Defense, USA. Team Dhaksha, with its most stable design, outperformed all the other contestants, competing against some of the best engineers from prestigious institutions across the globe, such as Middlesex University (UK), NTU and NUS (Singapore), Delft University of Technology (Netherlands), and other UAV industry participants, in the world's toughest UAV challenge. This has opened up an opportunity for Indian UAVs to establish a presence on the international scene as well. Following the above effort at the Fort Stewart military base in Georgia, USA, the Dhaksha team deployed the UAV with suitable payloads at a religious temple festival during November 2012 in Thiruvannamalai District for the Tamil Nadu Police, providing instant aerial imagery services over a crowd of 10 lakh (one million) pilgrims, and in the investigation of the structural strength of India's tallest structure, a 300 m RCC tower, during January 2013. The developed system consists of a custom-built rotary-wing model with on-board navigation, guidance and control systems (NGC) and a ground control station (GCS) for mission planning, remote access, manual overrides and imagery-related computations. The mission is to fulfill the competition requirements by using a UAS capable of providing a complete solution for the stated problem. In this work, the effort to produce multirotor unmanned aerial systems (UAS) for civilian applications at the MIT Avionics Laboratory is presented

  17. Wetland Vegetation Integrity Assessment with Low Altitude Multispectral Uav Imagery

    NASA Astrophysics Data System (ADS)

    Boon, M. A.; Tesfamichael, S.

    2017-08-01

    Until recently, multispectral sensors were too heavy and bulky for use on Unmanned Aerial Vehicles (UAVs), but this has changed and they are now commercially available. The usage of these sensors is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping of wetland ecosystems are rare. Here, we evaluate the performance of low altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn of a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high resolution multispectral orthoimages and NDVI. The WET-Health vegetation module completed with the aid of the multispectral UAV products indicated that the vegetation of the wetland is largely modified ("D" PES Category) and that the condition is expected to
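
    The NDVI used in this assessment is a simple per-pixel band ratio, (NIR - Red) / (NIR + Red). A minimal sketch, with illustrative function names:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel pair.
    eps guards against division by zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

def ndvi_map(nir_band, red_band):
    """Per-pixel NDVI over two equally sized 2D bands (lists of rows)."""
    return [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

    Healthy vegetation reflects strongly in the near-infrared, so vigorous cover pushes NDVI towards 1, while bare soil and water sit near or below 0.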

  18. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    PubMed

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating feature depth, based on a stochastic technique of triangulation. In the proposed method, the camera is mounted on a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. This assumption simplifies the overall problem and focuses it on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
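
    The paper's stochastic triangulation is not detailed in the abstract. As a rough geometric intuition only (my illustration, not the authors' method): with an attitude-stabilized camera, feature depth from a lateral translation reduces to pinhole parallax, and first-order noise propagation gives its uncertainty:

```python
def depth_from_parallax(baseline_m, focal_px, disparity_px):
    """Pinhole depth from lateral camera translation: z = b * f / d.
    Returns None when the disparity is too small to triangulate."""
    if disparity_px <= 1e-6:
        return None
    return baseline_m * focal_px / disparity_px

def depth_sigma(z, baseline_m, focal_px, sigma_disp_px):
    """First-order propagation of disparity noise into depth uncertainty:
    sigma_z = z^2 * sigma_d / (b * f). Depth error grows quadratically
    with depth, which motivates treating depth stochastically."""
    return (z * z) * sigma_disp_px / (baseline_m * focal_px)
```

    The quadratic growth of `depth_sigma` with depth is why distant features need either a longer baseline or a probabilistic depth representation before they can be safely inserted into the map.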

  19. Environmental Remote Sensing Analysis Using Open Source Virtual Earths and Public Domain Imagery

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Worthy, L. D.

    2008-12-01

    Human activities increasingly impact natural environments. Globally, many ecosystems are stressed to unhealthy limits, leading to loss of valuable ecosystem services: economic, ecological, and intrinsic. Virtual earths (virtual globes) (e.g., NASA World Wind, ossimPlanet, ArcGIS Explorer, Google Earth, Microsoft Virtual Earth) are geospatial data integration tools that can aid our efforts to understand and protect the environment. Virtual earths provide unprecedented desktop views of our planet, not only to professional scientists, but also to citizen scientists, students, environmental stewards, decision makers, urban developers and planners. Anyone with a broadband internet connection can explore the planet virtually, due in large part to freely available open source software and public domain imagery. This has at least two important potential benefits. One, individuals can study the planet from the visually intuitive perspective of the synoptic aerial view, promoting environmental awareness and stewardship. Two, it opens up the possibility of harnessing the in situ knowledge and observations of citizen scientists familiar with landscape conditions in their locales. Could this collective knowledge be harnessed (crowdsourcing) to validate and quality-assure land cover and other maps? In this presentation we present examples using public domain imagery and two open source virtual earths to highlight some of the functionalities currently available. OssimPlanet is used to view aerial data from the USDA Geospatial Data Gateway. NASA World Wind is used to extract georeferenced high resolution USGS urban area orthoimagery. ArcGIS Explorer is used to demonstrate an example of image analysis using web processing services. The research presented here was conducted under the Environmental Feature Finder project of the Environmental Protection Agency's Advanced Monitoring Initiative. Although this work was reviewed by EPA and approved for publication, it may not necessarily

  20. Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles

    PubMed Central

    Gökçe, Fatih; Üçoluk, Göktürk; Şahin, Erol; Kalkan, Sinan

    2015-01-01

    Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense and avoid purposes on mUAVs or on other aerial vehicles and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the subsequent, more complex stages. We also integrate a distance estimation method with our system utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that are collected in a systematic way and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032×778 resolution) and 150 ms outdoors (1280×720 resolution) per frame, with an F-score of 0.96. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis shows that the cascaded classifiers using HOG train and run faster than the other algorithms. PMID:26393599
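
    The geometric cue behind bounding-box distance estimation is the pinhole relation between a known physical target size and the detected box width. A minimal sketch (the support vector regressor of the paper is omitted; names are illustrative):

```python
def distance_from_bbox(focal_px, real_width_m, bbox_width_px):
    """Pinhole-model distance cue: Z = f * W / w, where f is the focal
    length in pixels, W the physical target width in metres, and w the
    detected bounding-box width in pixels."""
    if bbox_width_px <= 0:
        raise ValueError("bounding box width must be positive")
    return focal_px * real_width_m / bbox_width_px
```

    This explains why tight, well-positioned boxes (as produced by the Haar-based cascade) yield better distance estimates: any error in `bbox_width_px` propagates directly into Z.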

  1. The Integration of the Naval Unmanned Combat Aerial System (N-UCAS) into the Future Naval Air Wing

    DTIC Science & Technology

    2009-12-01

    Table 1. Aircraft Combat Radius from World War II (WWII) Through 1990s: Period WWII - F6F, 400 nm; TBF, 400 nm; SB2C...override the computers, take control, and guide his two bombs to target by infrared video imagery. Otherwise, our auto-piloted computer was programmed

  2. Violence against women in video games: a prequel or sequel to rape myth acceptance?

    PubMed

    Beck, Victoria Simpson; Boys, Stephanie; Rose, Christopher; Beck, Eric

    2012-10-01

    Current research suggests a link between negative attitudes toward women and violence against women, and it also suggests that media may condition such negative attitudes. When considering the tremendous and continued growth of video game sales, and the resulting proliferation of sexual objectification and violence against women in some video games, it is lamentable that there is a dearth of research exploring the effect of such imagery on attitudes toward women. This is the first study to use actual video game playing and control for causal order when exploring the effect of sexual exploitation and violence against women in video games on attitudes toward women. By employing a Solomon Four-Group experimental research design, this exploratory study found that a video game depicting sexual objectification of women and violence against women resulted in statistically significant increased rape myth acceptance (rape-supportive attitudes) for male study participants but not for female participants.

  3. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  4. Using video self- and peer modeling to facilitate reading fluency in children with learning disabilities.

    PubMed

    Decker, Martha M; Buggey, Tom

    2014-01-01

    The authors compared the effects of video self-modeling and video peer modeling on oral reading fluency of elementary students with learning disabilities. A control group was also included to gauge general improvement due to reading instruction and familiarity with researchers. The results indicated that both interventions resulted in improved fluency. Students in both experimental groups improved their reading fluency. Two students in the self-modeling group made substantial and immediate gains beyond any of the other students. Discussion is included that focuses on the importance that positive imagery can have on student performance and the possible applications of both forms of video modeling with students who have had negative experiences in reading.

  5. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  6. Analysis by gender and Visual Imagery Reactivity of conventional and imagery Rorschach.

    PubMed

    Yanovski, A; Menduke, H; Albertson, M G

    1995-06-01

    Examined here are the effects of gender and Visual Imagery Reactivity in 80 consecutively selected psychiatric outpatients. The participants were grouped by gender and by the amount of responsiveness to preceding therapy work using imagery (Imagery Nonreactors and Reactors). In the group of Imagery Nonreactors were 13 men and 22 women, and in the Reactor group were 17 men and 28 women. Compared were the responses to the standard Rorschach (Conventional condition) with visual associations to memory images of Rorschach inkblots (Imagery condition). Responses were scored using the Visual Imagery Reactivity (VIR) scoring system, a general, test-nonspecific scoring method. Nonparametric statistical analysis showed that critical indicators of Imagery Reactivity encoded as the High Affect/Conflict score and its derivatives associated with sexual or bizarre content were not significantly associated with gender; neither was the Neutral Content score, which categorizes "non-Reactivity." These results support the notion that the system's criteria of Visual Imagery Reactivity can be applied equally to both men and women for the classification of Imagery Reactors and Nonreactors. Also discussed are the speculative consequences of extending the tolerance range of significance levels for the interaction between Reactivity and sex above the customary limit of p < .05 in borderline cases. The results of such an analysis may imply a trend towards more rigid defensiveness under Imagery and toward lower verbal productivity in response to either the Conventional or the Imagery task among women who are Nonreactors. Among Reactors, men produced significantly more Sexual Reference scores (in the subcategory not associated with High Affect/Conflict) than women, but this could be attributed to the combined effect of the tester's and subjects' gender.

  7. Interpretation of Passive Microwave Imagery of Surface Snow and Ice: Harding Lake, Alaska

    DTIC Science & Technology

    1991-06-01

    ...conditions in microwave imagery depends on the characteristics of the sensor system (Fig. 1). The lake is roughly circular in shape... local oscillator frequency 33.6 GHz, IF bandwidth greater than 500 MHz, video bandwidth 1.7 kHz... cracks in the ice sheet... surface snow had occurred on these similarly sized lakes... using passive microwave sensors. IEEE Transactions on Geoscience... Additional field verifications

  8. Development of Virtual Field Experiences for undergraduate geoscience using 3D models from aerial drone imagery and other data

    NASA Astrophysics Data System (ADS)

    Karchewski, B.; Dolphin, G.; Dutchak, A.; Cooper, J.

    2017-12-01

    In geoscience one must develop important skills related to data collection, analysis and interpretation in the field. The quadrupling of student enrollment in geoscience at the University of Calgary in recent years presents a unique challenge in providing field experience. With introductory classes ranging from 300-500 students, field trips are logistical impossibilities and the impact on the quality of student learning and engagement is major and negative. Field experience is fundamental to geoscience education, but is presently lacking prior to the third year curriculum. To mitigate the absence of field experience in the introductory curricula, we are developing a set of Virtual Field Experiences (VFEs) that approximate field experiences via inquiry-based exploration of geoscientific principles. We incorporate a variety of data into the VFEs including gigapan photographs, geologic maps and high resolution 3D models constructed from aerial drone imagery. We link the data using a web-based platform to support lab exercises guided by a set of inquiry questions. An important feature that distinguishes a VFE is that students explore the data in a nonlinear fashion to construct and revise models that explain the nature of the field site. The aim is to approximate an actual field experience rather than provide a virtual guided tour where the explanation of the site comes pre-packaged. Thus far, our group has collected data at three sites in Southern Alberta: Mt. Yamnuska, Drumheller environs and the North Saskatchewan River valley near the toe of the Saskatchewan Glacier. The Mt. Yamnuska site focusses on a prominent thrust fault in the front ranges of the Western Cordillera. The Drumheller environs site demonstrates the siliciclastic sedimentation and stratigraphy typical of southeastern Alberta. The Saskatchewan Glacier site highlights periglacial geomorphology and glacial recession. All three sites were selected because they showcase a broad range of geoscientific

  9. Early aerial photography and contributions to Digital Earth - The case of the 1921 Halifax air survey mission in Canada

    NASA Astrophysics Data System (ADS)

    Werle, D.

    2016-04-01

    This paper presents research into the military and civilian history, technological development, and practical outcomes of aerial photography in Canada immediately after the First World War. The collections of early aerial photography in Canada and elsewhere, as well as the institutional and practical circumstances and arrangements of their creation, represent an important part of remote sensing heritage. It is argued that the digital rendition of the air photos and their representation in mosaic form can make valuable contributions to Digital Earth historic inquiries and mapping exercises today. An episode of one of the first urban surveys, carried out over Halifax, Nova Scotia, in 1921, is highlighted, and an air photo mosaic and interpretation key is presented. Using the almost one-hundred-year-old air photos and a digitally re-assembled mosaic of a substantial portion of that collection as a guide, a variety of features unique to the post-war urban landscape of the Halifax peninsula are analysed, illustrated, and compared with records of past and current land use. The panchromatic air photo ensemble at a nominal scale of 1:5,000 is placed into historical context with contemporary thematic maps, recent air photos, and modern satellite imagery. Further research opportunities and applications concerning early Canadian aerial photography are outlined.

  10. “I'll be your cigarette—Light me up and get on with it”: Examining smoking imagery on YouTube

    PubMed Central

    Forsyth, Susan R.

    2010-01-01

    Introduction: Smoking imagery on the online video sharing site YouTube is prolific and easily accessed. However, no studies have examined how this content changes across time. We studied the primary message and genre of YouTube videos about smoking across two time periods. Methods: In May and July 2009, we used “cigarettes” and “smoking cigarettes” to retrieve the top 20 videos on YouTube by relevance and view count. Eliminating duplicates, 124 videos were coded for time period, overall message, genre, and brand mentions. Data were analyzed using descriptive statistics. Results: Videos portraying smoking positively far outnumbered smoking-negative videos in both samples, increasing as a percentage of total views across the time period. Fifty-eight percent of videos in the second sample were new. Among smoking-positive videos, music and magic tricks were most numerous, increasing from 66% to nearly 80% in July, with music accounting for most of the increase. Marlboro was the most frequently mentioned brand. Discussion: Videos portraying smoking positively predominate on YouTube, and this pattern persists across time. Tobacco control advocates could use YouTube more effectively to counterbalance prosmoking messages. PMID:20634267

  11. Using Google Streetview Panoramic Imagery for Geoscience Education

    NASA Astrophysics Data System (ADS)

    De Paor, D. G.; Dordevic, M. M.

    2014-12-01

    Google Streetview is a feature of Google Maps and Google Earth that allows viewers to switch from map or satellite view to 360° panoramic imagery recorded close to the ground. Most panoramas are recorded by Google engineers using special cameras mounted on the roofs of cars. Bicycles, snowmobiles, and boats have also been used, and sometimes the camera has been mounted on a backpack for off-road use by hikers and skiers or attached to scuba-diving gear for "Underwater Streetview (sic)." Streetview panoramas are linked together so that the viewer can change viewpoint by clicking forward and reverse buttons. They therefore create a 4-D touring effect. As part of the GEODE project ("Google Earth for Onsite and Distance Education"), we are experimenting with the use of Streetview imagery for geoscience education. Our web-based test application allows instructors to select locations for students to study. Students are presented with a set of questions or tasks that they must address by studying the panoramic imagery. Questions include identification of rock types, structures such as faults, and general geological setting. The student view is locked into Streetview mode until they submit their answers, whereupon the map and satellite views become available, allowing students to zoom out and verify their location on Earth. Student learning is scaffolded by automatic computerized feedback. There are many existing Streetview panoramas with rich geological content. Additionally, instructors and members of the general public can create panoramas, including 360° Photo Spheres, by stitching images taken with their mobile devices and submitting them to Google for evaluation and hosting. A multi-thousand-dollar, multi-directional camera and mount can be purchased from DIY-streetview.com. This allows power users to generate their own high-resolution panoramas. A cheaper, 360° video camera is soon to be released according to geonaute.com. Thus there are opportunities for

  12. AERIAL MEASURING SYSTEM IN JAPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Craig; Colton, David

    2012-01-01

    The U.S. Department of Energy National Nuclear Security Administration's Aerial Measuring System deployed personnel and equipment to partner with the U.S. Air Force in Japan to conduct multiple aerial radiological surveys. These were the first and most comprehensive sources of actionable information for U.S. interests in Japan and provided early confirmation to the government of Japan as to the extent of the release from the Fukushima Daiichi Nuclear Power Generation Station. Many challenges were overcome quickly during the first 48 hours, including installation and operation of Aerial Measuring System equipment on multiple U.S. Air Force Japan aircraft, flying over difficult terrain, and flying with talented pilots who were unfamiliar with the Aerial Measuring System flight patterns. These all combined to make for a dynamic and non-textbook situation. In addition, the data challenges of the multiple and on-going releases, and integration with the Japanese government to provide valid aerial radiological survey products that both military and civilian customers could use to make informed decisions, were extremely complicated. The Aerial Measuring System Fukushima response provided insight in addressing these challenges and gave way to an opportunity for the expansion of the Aerial Measuring System's mission beyond the borders of the US.

  13. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    PubMed

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provisioning feed with minimal waste; to determining whether the accumulation of organic-matter residues dictates exchange of pond water; and to management decisions concerning shrimp health.

  14. Crop identification and acreage measurement utilizing ERTS imagery

    NASA Technical Reports Server (NTRS)

    Vonsteen, D. H. (Principal Investigator)

    1972-01-01

    There are no author-identified significant results in this report. The microdensitometer will be used to analyze data acquired by ERTS-1 imagery. The classification programs and software packages have been acquired and are being prepared for use with the information as it is received. Photo and digital tapes have been acquired for coverage of virtually 100 percent of the test site areas. These areas are located in South Dakota, Idaho, Missouri, and Kansas. Hass 70mm color infrared, infrared, and black and white high altitude aerial photography of the test sites is available. Collection of ground truth for updating the data base has been completed, and a computer program was written to count the number of fields and give total acres by size group for the segments in each test site. Results are given of data analysis performed on digitized data from densitometer measurements of fields of corn, sugar beets, and alfalfa in Kansas.

  15. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
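
    Coding frame-level camera motion plus block-level plane parameters corresponds to the classical plane-induced homography H = K (R + t n^T / d) K^-1. A minimal sketch of composing it from those parameters (pure Python and illustrative; not the HEVC-integrated implementation of the paper):

```python
def matmul(A, B):
    """Multiply two 3x3 matrices (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def plane_homography(K, K_inv, R, t, n, d):
    """Plane-induced homography H = K (R + t n^T / d) K^-1, mapping pixels
    of points on the plane n . X = d from one view into the other.
    K / K_inv are frame-level intrinsics; (n, d) are per-block plane
    parameters; (R, t) is the frame-level camera motion."""
    M = [[R[i][j] + t[i] * n[j] / d for j in range(3)] for i in range(3)]
    return matmul(K, matmul(M, K_inv))
```

    Each block then needs only its three plane parameters at the block level, since R, t and the intrinsics are shared by the whole frame, which is the source of the bit-rate savings claimed above.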

  16. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e. the observations are taken at intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to a larger scene coverage.
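
    The adaptive-threshold differencing described above can be sketched as a per-pixel score thresholded at the mean plus k standard deviations. The weights and k below are illustrative defaults, not the authors' values:

```python
def change_mask(int_a, int_b, grad_a, grad_b, w_int=0.6, w_grad=0.4, k=2.0):
    """Change mask from a linear combination of absolute intensity and
    gradient-magnitude differences, thresholded adaptively at
    mean + k * std-dev of the score image. Inputs are 2D lists of rows."""
    h, w = len(int_a), len(int_a[0])
    score = [[w_int * abs(int_a[y][x] - int_b[y][x]) +
              w_grad * abs(grad_a[y][x] - grad_b[y][x])
              for x in range(w)] for y in range(h)]
    flat = [v for row in score for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    thresh = mean + k * var ** 0.5   # adaptive threshold per image pair
    return [[1 if v > thresh else 0 for v in row] for row in score]
```

    Making the threshold a function of the score statistics keeps the mask stable across mosaics with different global illumination and registration quality, rather than relying on a fixed cut-off.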

  17. The potential of unmanned aerial systems for sea turtle research and conservation: a review and future directions

    USGS Publications Warehouse

    Rees, Alan F.; Avens, Larisa; Ballorain, Katia; Bevan, Elizabeth; Broderick, Annette C.; Carthy, Raymond R.; Christianen, Marjolijn J. A.; Duclos, Gwénaël; Heithaus, Michael R.; Johnston, David W.; Mangel, Jeffrey C.; Paladino, Frank V.; Pendoley, Kellie; Reina, Richard D.; Robinson, Nathan J.; Ryan, Robert; Sykora-Bodie, Seth T.; Tilley, Dominic; Varela, Miguel R.; Whitman, Elizabeth R.; Whittock, Paul A.; Wibbels, Thane; Godley, Brendan J.

    2018-01-01

The use of satellite systems and manned aircraft surveys for remote data collection has been shown to be transformative for sea turtle conservation and research by enabling the collection of data on turtles and their habitats over larger areas than can be achieved by surveys on foot or by boat. Unmanned aerial vehicles (UAVs) or drones are increasingly being adopted to gather data at unprecedented spatial and temporal resolutions in diverse geographic locations. This easily accessible, low-cost tool is improving existing research methods and enabling novel approaches in marine turtle ecology and conservation. Here we review the diverse ways in which incorporating inexpensive UAVs may reduce costs and field time while improving safety and data quality and quantity over existing methods for studies on turtle nesting and at-sea distribution and behaviour, as well as expanding into new avenues such as surveillance against illegal take. Furthermore, we highlight the impact that high-quality aerial imagery captured by UAVs can have for public outreach and engagement. This technology does not come without challenges. We discuss the potential constraints of these systems within the ethical and legal frameworks in which researchers must operate, and the difficulties that can result with regard to storage and analysis of large amounts of imagery. We then suggest areas where technological development could further expand the utility of UAVs as data-gathering tools; for example, functioning as downloading nodes for data collected by sensors placed on turtles. Development of methods for the use of UAVs in sea turtle research will serve as case studies for use with other marine and terrestrial taxa.

  18. Use of Aerial high resolution visible imagery to produce large river bathymetry: a multi temporal and spatial study over the by-passed Upper Rhine

    NASA Astrophysics Data System (ADS)

    Béal, D.; Piégay, H.; Arnaud, F.; Rollet, A.; Schmitt, L.

    2011-12-01

Aerial high-resolution visible imagery allows large-river bathymetry to be produced, assuming that water depth is related to water colour (Beer-Bouguer-Lambert law). In this paper we aim to monitor changes in Rhine River geometry in a diachronic study, as well as sediment transport after an artificial injection (a 25,000 m3 restoration operation). To that end, a substantial database of ground measurements of river depth is used, built from 3 different sources: (i) differential GPS acquisitions, (ii) sounder data and (iii) lateral profiles surveyed by experts. Water depth is estimated using a multiple linear regression over neo-channels built from a principal component analysis of the red, green and blue bands, together with the depth data cited above. The study site is a 12 km long reach of the by-passed section of the Rhine River that forms the French-German border. This section has been heavily impacted by engineering works during the last two centuries: channelization since 1842 for navigation purposes, and the construction of a 45 km long lateral canal and 4 consecutive hydroelectric power plants since 1932. Several bathymetric models are produced, based on 3 different spatial resolutions (6, 13 and 20 cm) and 5 acquisitions (January, March, April, August and October) since 2008. The objectives are to find the optimal spatial resolution and to characterize seasonal effects. The best performance, at the 13 cm resolution, shows an 18 cm accuracy when suspended matter least impacted water transparency. Discussion is oriented towards the monitoring of the artificial sediment reload after 2 flood events during winter 2010-2011. The bathymetric models produced are also useful for building the mesh of a 2D hydraulic model.
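The depth-estimation step, a multiple linear regression over PCA "neo-channels" of the RGB bands calibrated against ground depth measurements, can be sketched roughly as below. Function names and the calibration interface are illustrative; the real workflow would also handle georeferencing and masking of dry pixels:

```python
import numpy as np

def fit_depth_model(rgb, depths):
    """Sketch: project RGB calibration samples onto their principal
    components ('neo-channels'), then fit a multiple linear regression
    of measured depth on those components plus an intercept."""
    X = rgb.astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # PCA axes
    pcs = Xc @ vt.T                                    # neo-channel values
    A = np.column_stack([pcs, np.ones(len(pcs))])
    coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return mean, vt, coef

def predict_depth(rgb, mean, vt, coef):
    """Apply the fitted model to new RGB pixels to estimate depth."""
    pcs = (rgb.astype(float) - mean) @ vt.T
    return np.column_stack([pcs, np.ones(len(pcs))]) @ coef
```

With a synthetic linear depth-colour relation the regression recovers depths exactly; on real imagery, accuracy is limited by turbidity, which is why the abstract reports best results when suspended matter was low.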

  18. Sherlock Holmes' or Don Quixote's certainty? Interpretations of cropmarks on satellite imageries in archaeological investigation

    NASA Astrophysics Data System (ADS)

    Wilgocka, Aleksandra; Rączkowski, Włodzimierz; Kostyrko, Mikołaj; Ruciński, Dominik

    2016-08-01

    Years of experience in air-photo interpretation lead us to conclude that we know what we are looking at, we know why we can see cropmarks, and we can even estimate when the opportunities to observe them are best. But even today, cropmarks may be subject to misinterpretation or wishful thinking. The same problems appear when working with aerial photographs, satellite imageries, ALS, geophysics, etc. In the paper we present several case studies based on data acquired for and within ArchEO - archaeological applications of Earth Observation techniques project - to discuss the complexity and consequences of archaeological interpretations. While testing the usefulness of satellite imagery in Poland on various types of sites, cropmarks were the most frequent indicators of past landscapes as well as of archaeological and natural features. Hence, new archaeological sites have been discovered mainly thanks to cropmarks. This situation has given us an opportunity not only to test satellite imageries as a source of data but also to confront them with the results of other non-invasive methods of data acquisition. Working with such a variety of data, we encountered several issues that raised problems of interpretation. Consequently, questions related to the cognitive value of remote sensing data appear and should be discussed. What do the data represent? To what extent do the imageries, cropmarks or other visualizations represent the past? How should we deal with the ambiguity of the data? And what can we learn from pitfalls in the interpretation of cropmarks, soilmarks etc., so as to follow Sherlock's methodology rather than chase Don Quixote's delusions?

  20. A scheme for the uniform mapping and monitoring of earth resources and environmental complexes using ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Poulton, C. E. (Principal Investigator); Welch, R. I.

    1973-01-01

    There are no author-identified significant results in this report. Progress on plans for the development and testing of a practical procedure and system for the uniform mapping and monitoring of natural ecosystems and environmental complexes from space-acquired imagery is discussed. With primary emphasis on ERTS-1 imagery, but supported by appropriate aircraft photography as necessary, the objectives are to accomplish the following: (1) Develop and test in a few selected sites and areas of the western United States a standard format for an ecological and land use legend for making natural resource inventories on a simulated global basis. (2) Based on these same limited geographic areas, identify the potentialities and limitations of the legend concept for the recognition and annotation of ecological analogs and environmental complexes. An additional objective is to determine the optimum combination of space photography, aerial photography, ground data, human data analysis, and automatic data analysis for estimating crop yield in the rice growing areas of California and Louisiana.

  1. Convergence in full motion video processing, exploitation, and dissemination and activity based intelligence

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Lewis, Gina

    2012-06-01

    Over the last decade, intelligence capabilities within the Department of Defense/Intelligence Community (DoD/IC) have evolved from ad hoc, single source, just-in-time, analog processing; to multi source, digitally integrated, real-time analytics; to multi-INT, predictive Processing, Exploitation and Dissemination (PED). Full Motion Video (FMV) technology and motion imagery tradecraft advancements have greatly contributed to Intelligence, Surveillance and Reconnaissance (ISR) capabilities during this timeframe. Imagery analysts have exploited events, missions and high value targets, generating and disseminating critical intelligence reports within seconds of occurrence across operationally significant PED cells. Now, we go beyond FMV, enabling All-Source Analysts to effectively deliver ISR information in a multi-INT sensor rich environment. In this paper, we explore the operational benefits and technical challenges of an Activity Based Intelligence (ABI) approach to FMV PED. Existing and emerging ABI features within FMV PED frameworks are discussed, to include refined motion imagery tools, additional intelligence sources, activity relevant content management techniques and automated analytics.

  2. Flame filtering and perimeter localization of wildfires using aerial thermal imagery

    NASA Astrophysics Data System (ADS)

    Valero, Mario M.; Verstockt, Steven; Rios, Oriol; Pastor, Elsa; Vandecasteele, Florian; Planas, Eulàlia

    2017-05-01

    Airborne thermal infrared (TIR) imaging systems are being increasingly used for wildfire tactical monitoring, since they show important advantages over spaceborne platforms and visible sensors while being much more affordable and much lighter than multispectral cameras. However, the analysis of aerial TIR images entails a number of difficulties which have thus far prevented monitoring tasks from being totally automated. One issue that needs to be addressed is the appearance of flame projections during the geo-correction of off-nadir images. Filtering these flames out is essential in order to accurately estimate the geographical location of the fuel-burning interface. Therefore, we present a methodology which allows the automatic localisation of the active fire contour free of flame projections. The actively burning area is detected in georeferenced TIR images through a combination of intensity thresholding techniques, morphological processing and active contours. Subsequently, flame projections are filtered out by temporal frequency analysis of appropriate contour descriptors. The proposed algorithm was tested on footage acquired during three large-scale experimental field burns. Results suggest this methodology may be suitable to automatise the acquisition of quantitative data about fire evolution. As future work, a revision of the low-pass filter implemented for the temporal analysis (currently a median filter) is recommended. The availability of up-to-date information about the fire state would improve situational awareness during an emergency response and could be used to calibrate data-driven simulators capable of issuing accurate short-term forecasts of the subsequent fire evolution.
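The temporal filtering idea, suppressing flame projections as high-frequency fluctuations of a contour descriptor with the running median mentioned above, might be sketched as follows. The per-angle radius representation and the window size are assumptions for illustration, not the paper's exact descriptors:

```python
import numpy as np

def filter_flame_projections(radii_t, window=5):
    """Hedged sketch: flame projections show up as short-lived temporal
    spikes in a contour descriptor (here, contour radius sampled at fixed
    angles per frame), while the fuel-burning interface moves slowly.
    A running temporal median suppresses the flame component."""
    radii_t = np.asarray(radii_t, float)          # shape (frames, angles)
    half = window // 2
    padded = np.pad(radii_t, ((half, half), (0, 0)), mode='edge')
    out = np.empty_like(radii_t)
    for t in range(radii_t.shape[0]):
        out[t] = np.median(padded[t:t + window], axis=0)
    return out
```

In the full pipeline this filtering follows the thresholding, morphological processing and active-contour steps that extract the raw fire contour from each georeferenced frame.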

  3. The sky is the limit: reconstructing physical geography fieldwork from an aerial perspective

    NASA Astrophysics Data System (ADS)

    Williams, R.; Tooth, S.; Gibson, M.; Barrett, B.

    2017-12-01

    In an era of rapid geographical data acquisition, interpretations of remote sensing products (e.g. aerial photographs, satellite images, digital elevation models) are an integral part of many undergraduate geography degree schemes, but there are fewer opportunities for collection and processing of primary remote sensing data. Unmanned aerial vehicles (UAVs) provide a relatively cheap opportunity to introduce the principles and practice of airborne remote sensing into fieldcourses, enabling students to learn about image acquisition, data processing and interpretation of derived products. Three case studies illustrate how a low-cost DJI Phantom UAV can be used by students to acquire images that can be processed using off-the-shelf Structure-from-Motion photogrammetry software. Two case studies are drawn from an international fieldcourse that takes students to field sites that are the focus of current funded research, whilst a third case study is from a course in topographic mapping. Results from a student questionnaire and analysis of assessed student reports showed that using UAVs in fieldwork enhanced student engagement with themes on their fieldcourse and equipped them with data processing skills. The derivation of bespoke orthophotos and Digital Elevation Models also provided students with opportunities to gain insight into the various data quality issues that are associated with aerial imagery acquisition and topographic reconstruction, although additional training is required to maximise this potential. Recognition of the successes and limitations of this teaching intervention provides scope for improving exercises that use UAVs and other technologies in future fieldcourses. UAVs are enabling both a reconstruction of how we measure the Earth's surface and a reconstruction of how students do fieldwork.

  4. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  5. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses

    PubMed Central

    Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

    Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery is a valuable tool to monitor plant nutrition and reduce nitrogen (N) application to real needs, thus producing both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) ‘Patriot’, Zoysia matrella (Zm) ‘Zeon’ and Paspalum vaginatum (Pv) ‘Salam’. Proximity- and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with an onboard multispectral sensor, to determine the Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from the UAV, with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt, with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV-acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option. PMID:27341674
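The NDVI used by both the handheld sensor and the UAV multispectral camera is a standard ratio index; a minimal sketch (the `eps` guard against division by zero is an implementation detail, not from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index:
    NDVI = (NIR - Red) / (NIR + Red), per pixel or per plot."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)
```

Comparing the two acquisition sources, as the trial does, then amounts to correlating per-plot NDVI values (e.g. with `np.corrcoef`) against each other and against clippings N content.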

  6. The sky is the limit? 20 years of small-format aerial photography taken from UAS for monitoring geomorphological processes

    NASA Astrophysics Data System (ADS)

    Marzolff, Irene

    2014-05-01

    One hundred years after the first publication on aerial photography taken from unmanned aerial platforms (Arthur Batut 1890), small-format aerial photography (SFAP) became a distinct niche within remote sensing during the 1990s. Geographers, plant biologists, archaeologists and other researchers with geospatial interests re-discovered the usefulness of unmanned platforms for taking high-resolution, low-altitude photographs that could then be digitized and analysed with geographical information systems, (softcopy) photogrammetry and image processing techniques originally developed for digital satellite imagery. Even before the ubiquity of digital consumer-grade cameras and 3D analysis software accessible to the photogrammetric layperson, do-it-yourself remote sensing using kites, blimps, drones and micro air vehicles literally enabled the questing researcher to get their own pictures of the world. As a flexible, cost-effective method, SFAP offered images with high spatial and temporal resolutions that could be ideally adapted to the scales of landscapes, forms and distribution patterns to be monitored. During the last five years, this development has been significantly accelerated by the rapid technological advancements of GPS navigation, autopiloting and revolutionary softcopy-photogrammetry techniques. State-of-the-art unmanned aerial systems (UAS) now allow automatic flight planning, autopilot-controlled aerial surveys, ground control-free direct georeferencing and DEM plus orthophoto generation with centimeter accuracy, all within the space of one day. The ease of use of current UAS and processing software for the generation of high-resolution topographic datasets and spectacular visualizations is tempting and has spurred the number of publications on these issues - but which advancements in our knowledge and understanding of geomorphological processes have we seen and can we expect in the future? 
This presentation traces the development of the last two decades

  7. Using Digital Time-Lapse Videos to Teach Geomorphic Processes to Undergraduates

    NASA Astrophysics Data System (ADS)

    Clark, D. H.; Linneman, S. R.; Fuller, J.

    2004-12-01

    We demonstrate the use of relatively low-cost, computer-based digital imagery to create time-lapse videos of two distinct geomorphic processes in order to help students grasp the significance of the rates, styles, and temporal dependence of geologic phenomena. Student interviews indicate that such videos help them to understand the relationship between processes and landform development. Time-lapse videos have been used extensively in some sciences (e.g., biology - http://sbcf.iu.edu/goodpract/hangarter.html, meteorology - http://www.apple.com/education/hed/aua0101s/meteor/, chemistry - http://www.chem.yorku.ca/profs/hempsted/chemed/home.html) to demonstrate gradual processes that are difficult for many students to visualize. Most geologic processes are slower still, and are consequently even more difficult for students to grasp, yet time-lapse videos are rarely used in earth science classrooms. The advent of inexpensive web-cams and computers provides a new means to explore the temporal dimension of earth surface processes. To test the use of time-lapse videos in geoscience education, we are developing time-lapse movies that record the evolution of two landforms: a stream-table delta and a large, natural, active landslide. The former involves well-known processes in a controlled, repeatable laboratory experiment, whereas the latter tracks the developing dynamics of an otherwise poorly understood slope failure. The stream-table delta is small and grows in ca. 2 days; we capture a frame on an overhead web-cam every 3 minutes. Before seeing the video, students are asked to hypothesize how the delta will grow through time. The final time-lapse video, ca. 20-80 MB, elegantly shows channel migration, progradation rates, and formation of major geomorphic elements (topset, foreset, bottomset beds). The web-cam can also be "zoomed-in" to show smaller-scale processes, such as bedload transfer, and foreset slumping. 
Post-lab tests and interviews with students indicate that

  8. Detection and tracking of gas plumes in LWIR hyperspectral video sequence data

    NASA Astrophysics Data System (ADS)

    Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.

    2013-05-01

    Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over the conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few Principal Components resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume. These include K-means, spectral clustering, and the Ginzburg-Landau functional.
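The PCA "spectral filter" step, projecting each pixel's spectrum onto the leading principal components of the whole video sequence, might look like the following sketch; the array layout `(frames, height, width, bands)` is an assumption:

```python
import numpy as np

def pca_spectral_filter(cube, n_components=3):
    """Sketch of the dimension-reduction step: flatten a hyperspectral
    video cube to (pixels x bands), project each pixel's spectrum onto
    the first few principal components of the whole sequence, and
    reshape back to a low-dimensional video."""
    t, h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes
    proj = Xc @ vt[:n_components].T                    # spectral filter
    return proj.reshape(t, h, w, n_components)
```

The Midway histogram equalization and the clustering stages (K-means, spectral clustering, Ginzburg-Landau) would then operate on this reduced representation.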

  9. The Functional Equivalence between Movement Imagery, Observation, and Execution Influences Imagery Ability

    ERIC Educational Resources Information Center

    Williams, Sarah E.; Cumming, Jennifer; Edwards, Martin G.

    2011-01-01

    Based on literature identifying movement imagery, observation, and execution to elicit similar areas of neural activity, research has demonstrated that movement imagery and observation successfully prime movement execution. To investigate whether movement and observation could prime ease of imaging from an external visual-imagery perspective, an…

  10. LiDAR data and SAR imagery acquired by an unmanned helicopter for rapid landslide investigation

    NASA Astrophysics Data System (ADS)

    Kasai, M.; Tanaka, Y.; Yamazaki, T.

    2012-12-01

    When earthquakes or heavy rainfall hit a landslide-prone area, initial actions require estimation of the extent of damage to people and infrastructure. This includes identifying the number and size of newly collapsed or expanded landslides, and appraising subsequent risks from remobilization of landslides and debris materials. In inapproachable areas, UAVs (Unmanned Aerial Vehicles) are likely to be of greatest use. In addition, repeat monitoring of sites after the event is a further way of utilizing UAVs, particularly in terms of cost and convenience. In this study, LiDAR (SkEyesBox MP-1) data and SAR (Nano SAR) imagery, acquired over a 0.5 km2 landslide-prone area, are presented to assess the practicability of using unmanned helicopters (in this case a 10-year-old YAMAHA RMAX G1) in these situations. LiDAR data were taken in July 2012, when tree foliage covered the ground surface. The imagery was nevertheless of sufficient quality to identify and measure landslide features, although LiDAR data obtained by a manned helicopter in the same area in August 2008 were more detailed, reflecting the capability of that LiDAR scanner. On the other hand, the 2 m resolution Nano SAR imagery produced reasonable results for elucidating hillslope condition. A quick method for data processing without loss of image quality was also investigated. In conclusion, the LiDAR scanner and UAV employed here could be used to plan immediate remedial activity in the area before LiDAR measurement with a manned helicopter can be organized. SAR imagery from a UAV is also available for this initial activity, and can be further applied to long-term monitoring.

  11. Concept of a digital aerial platform for conducting observation flights under the open skies treaty. (Polish Title: Koncepcja cyfrowej platformy lotniczej do realizacji misji obserwacyjnych w ramach traktatu o otwartych przestworzach)

    NASA Astrophysics Data System (ADS)

    Walczykowski, P.; Orych, A.

    2013-12-01

    The Treaty on Open Skies, to which Poland has been a signatory from the very beginning, was signed in 1992 in Helsinki. The main principle of the Treaty is increasing the openness of military activities conducted by the States Parties and control over respecting disarmament agreements. Responsibilities under the Treaty are fulfilled by conducting and receiving a given number of observation flights over the territories of the Treaty signatories. Among the 34 countries currently taking an active part in the Treaty, only some own certified airplanes and observation sensors. Poland is among the countries that do not own their own platform and therefore fulfils Treaty requirements using the Ukrainian An-30b. Initially, the Treaty permitted only analogue sensors for the acquisition of imagery data. With the development of digital techniques, a rise in the need for digital imagery products was noted. Digital photography is now used in almost all fields of study and everyday life. This has led to very rapid developments in digital sensor technologies, employing the newest and most innovative solutions. Digital imagery products have many advantages and have now almost fully replaced traditional film sensors. Digital technologies have given rise to a new era in Open Skies. The Open Skies Consultative Commission, having conducted many series of tests, signed a new Decision to the Treaty which allows digital aerial sensors to be used during observation flights. The main aim of this article is to design a concept for choosing digital sensors and selecting an airplane, i.e. a digital aerial platform, which could be used by Poland for Open Skies purposes. A thorough analysis of airplanes currently used by the Polish Air Force was conducted in terms of their specifications and the possibility of their employment for Open Skies Treaty missions. Next, an analysis was conducted of the latest aerial digital sensors offered by

  12. Operational Use of Remote Sensing within USDA

    NASA Technical Reports Server (NTRS)

    Bethel, Glenn R.

    2007-01-01

    A viewgraph presentation of remote sensing imagery within the USDA is shown. USDA Aerial Photography, Digital Sensors, Hurricane imagery, Remote Sensing Sources, Satellites used by Foreign Agricultural Service, Landsat Acquisitions, and Aerial Acquisitions are also shown.

  13. Estimating plant distance in maize using Unmanned Aerial Vehicle (UAV).

    PubMed

    Zhang, Jinshui; Basso, Bruno; Price, Richard F; Putman, Gregory; Shuai, Guanyuan

    2018-01-01

    Distances between rows and between plants are essential parameters that affect final grain yield in row crops. This paper presents the results of research intended to develop a novel method to quantify the distance between maize plants at field scale using an Unmanned Aerial Vehicle (UAV). Using this method, we can recognize maize plants as objects and calculate the distance between plants. We initially developed our method by training an algorithm in an indoor facility with plastic corn plants. The method was then scaled up and tested in a farmer's field with maize plant spacing that exhibited natural variation. The results of this study demonstrate that it is possible to precisely quantify the distance between maize plants. We found that the accuracy of the measurement of the distance between maize plants depended on the height above ground level at which the UAV imagery was taken. This study provides an innovative approach to quantify plant-to-plant variability and thereby improve final crop yield estimates.
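The paper's plant-detection model is not reproduced here, but once plant centroids have been detected in the imagery, plant-to-plant distance reduces to the spacing between ordered neighbours along the row; a hypothetical sketch:

```python
import numpy as np

def along_row_spacing(centroids):
    """Illustrative sketch (detection itself is assumed done upstream):
    given plant centroids for one row in ground-unit coordinates, sort
    them along the row axis and return neighbour-to-neighbour distances."""
    pts = np.asarray(centroids, float)
    order = np.argsort(pts[:, 0])            # sort along the row axis
    pts = pts[order]
    return np.linalg.norm(np.diff(pts, axis=0), axis=1)
```

The accuracy of such spacings in practice depends on the ground sampling distance, which is why the study found flight height above ground level to matter.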

  14. A highly sensitive underwater video system for use in turbid aquaculture ponds

    NASA Astrophysics Data System (ADS)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-08-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  15. A highly sensitive underwater video system for use in turbid aquaculture ponds

    PubMed Central

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-01-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health. PMID:27554201

  16. Normalization of satellite imagery

    NASA Technical Reports Server (NTRS)

    Kim, Hongsuk H.; Elman, Gregory C.

    1990-01-01

    Sets of Thematic Mapper (TM) imagery taken over the Washington, DC metropolitan area during the months of November, March and May were converted into a form of ground reflectance imagery. This conversion was accomplished by adjusting for the incident sunlight and view angles and by applying a pixel-by-pixel correction for atmospheric effects. Seasonal color changes of the area can be better observed when such normalization is applied to space imagery taken in time series. In normalized imagery, the grey scale depicts variations in surface reflectance, and the tonal signature of multi-band color imagery can be directly interpreted for quantitative information about the target.
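The sun-angle part of this normalization corresponds to the standard conversion from at-sensor radiance to top-of-atmosphere reflectance. This generic sketch omits the per-pixel atmospheric correction the authors also apply, and the parameter names are illustrative:

```python
import numpy as np

def toa_reflectance(radiance, esun, sun_elev_deg, d=1.0):
    """Standard top-of-atmosphere reflectance conversion:
        rho = pi * L * d^2 / (Esun * cos(theta_s))
    where L is at-sensor spectral radiance, Esun the band's mean solar
    exoatmospheric irradiance, d the Earth-Sun distance in AU, and
    theta_s the solar zenith angle (90 deg minus sun elevation)."""
    theta_s = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * d**2 / (esun * np.cos(theta_s))
```

Applied per band to a time series, this removes most of the illumination-geometry differences between the November, March and May acquisitions, so that the remaining tonal changes reflect the surface itself.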

  17. Monitoring Arctic Sea ice using ERTS imagery. [Bering Sea, Beaufort Sea, Canadian Archipelago, and Greenland Sea

    NASA Technical Reports Server (NTRS)

    Barnes, J. C.; Bowley, C. J.

    1974-01-01

    Because of the effect of sea ice on the heat balance of the Arctic and because of the expanding economic interest in arctic oil and other minerals, extensive monitoring and further study of sea ice is required. The application of ERTS data for mapping ice is evaluated for several arctic areas, including the Bering Sea, the eastern Beaufort Sea, parts of the Canadian Archipelago, and the Greenland Sea. Interpretive techniques are discussed, and the scales and types of ice features that can be detected are described. For the Bering Sea, a sample of ERTS imagery is compared with visual ice reports and aerial photography from the NASA CV-990 aircraft.

  18. Evaluation of ERTS-1 imagery in mapping and managing soil and range resources in the Sand Hills Region of Nebraska

    NASA Technical Reports Server (NTRS)

    Seevers, P. M.; Drew, J. V.

    1973-01-01

    Interpretations of high altitude photography of test sites in the Sandhills of Nebraska permitted identification of subirrigated range sites as well as complexes of choppy sands and sands range sites, units composing approximately 85% of the Sandhills rangeland. These range sites form the basic units necessary for the interpretation of range condition classes used in grazing management. Analysis of ERTS-1 imagery acquired during August, September and October, 1972 indicated potential for the identification of gross differences in forage density within given range sites identified on early season aerial photography.

  19. Detection of unmanned aerial vehicles using a visible camera system.

    PubMed

    Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C

    2017-01-20

    Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
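
    The detection chain in this record (horizon finding, motion feature extraction, blob analysis) can be illustrated with a toy difference-image variant. This is a hedged sketch, not the authors' implementation; the thresholds and connectivity choices are invented.

```python
import numpy as np

def detect_moving_blobs(prev_frame, frame, horizon_row, diff_thresh=25, min_area=4):
    """Toy difference-image detector: threshold the absolute frame difference,
    keep only pixels above the horizon line, and group 4-connected pixels
    into blobs. Returns (centroid_row, centroid_col, area) per blob."""
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > diff_thresh
    mask[horizon_row:, :] = False            # discard ground clutter below horizon
    visited = np.zeros_like(mask, dtype=bool)
    blobs = []
    rows, cols = mask.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not visited[r0, c0]:
                stack, pixels = [(r0, c0)], []
                visited[r0, c0] = True
                while stack:                 # flood-fill one connected blob
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and mask[rr, cc] and not visited[rr, cc]):
                            visited[rr, cc] = True
                            stack.append((rr, cc))
                if len(pixels) >= min_area:
                    ps = np.array(pixels, dtype=np.float64)
                    blobs.append((ps[:, 0].mean(), ps[:, 1].mean(), len(pixels)))
    return blobs
```

    The paper's coherence analysis, which filters blobs by temporal consistency across frames, would sit downstream of a stage like this.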

  20. USGS QA Plan: Certification of digital airborne mapping products

    USGS Publications Warehouse

    Christopherson, J.

    2007-01-01

    To facilitate acceptance of new digital technologies in aerial imaging and mapping, the US Geological Survey (USGS) and its partners have launched a Quality Assurance (QA) Plan for Digital Aerial Imagery. The plan should provide a foundation for assuring the quality of digital aerial imagery and derived products. It introduces broader considerations regarding the processes aerial flyers employ in collecting, processing and delivering data, and provides training and information for US producers and users alike.

  1. Longest time series of glacier mass changes in the Himalaya based on stereo imagery

    NASA Astrophysics Data System (ADS)

    Bolch, T.; Pieczonka, T.; Benn, D. I.

    2010-12-01

    Mass loss of Himalayan glaciers has wide-ranging consequences such as declining water resources, sea level rise and an increasing risk of glacial lake outburst floods (GLOFs). The assessment of the regional and global impact of glacier changes in the Himalaya is, however, hampered by a lack of mass balance data for most of the range. Multi-temporal digital terrain models (DTMs) allow glacier mass balance to be calculated for any period for which stereo imagery is available. Here we present the longest time series of mass changes in the Himalaya and show the high value of early stereo spy satellite imagery such as Corona (1962 and 1970) and recent high resolution satellite data (Cartosat-1) for calculating a time series of glacier changes south of Mt. Everest, Nepal. We reveal that the glaciers have been losing mass at an increasing rate since at least ~1970, despite thick debris cover. The specific mass loss is 0.32 ± 0.08 m w.e. a⁻¹, which is, however, not higher than the global average. The spatial patterns of surface lowering can be explained by variations in debris-cover thickness, glacier velocity, and ice melt due to exposed ice cliffs and ponds.
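
    The geodetic method behind such a time series, differencing two DTMs over the glacier and converting the volume change to water equivalent with an assumed ice density, reduces to a short calculation. The density and time values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def geodetic_mass_balance(dh, glacier_mask, years, rho_ice=850.0, rho_water=1000.0):
    """Specific mass balance (m w.e. a^-1) from a DTM difference grid.

    dh           : elevation change grid (m), later DTM minus earlier DTM
    glacier_mask : boolean grid, True on glacier
    years        : time separation of the two DTMs
    rho_ice      : assumed density for volume-to-mass conversion (kg m^-3)
    """
    mean_dh = dh[glacier_mask].mean()
    return mean_dh * (rho_ice / rho_water) / years
```

    Uncertainty in the assumed density is one reason geodetic balances carry error bars like the ± 0.08 m w.e. a⁻¹ quoted in the record.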

  2. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.

  3. 7 CFR 1755.506 - Aerial wire services

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Aerial wire services 1755.506 Section 1755.506... § 1755.506 Aerial wire services (a) Aerial services of one through six pairs shall consist of Service...), Specifications and Drawings for Service Installations at Customer Access Locations. The wire used for aerial...

  4. Mapping Surface Temperatures on a Debris-Covered Glacier with an Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Kraaijenbrink, Philip D. A.; Shea, Joseph M.; Litt, Maxime; Steiner, Jakob F.; Treichler, Désirée; Koch, Inka; Immerzeel, Walter W.

    2018-05-01

    A mantle of debris often accumulates across the surface of glaciers in active mountain ranges with exceptionally steep terrain, such as the Andes, Himalaya and New Zealand Alps. Such a supraglacial debris layer has a major influence on a glacier's surface energy budget, enhancing radiation absorption and melt when the layer is thin, but insulating the ice when thicker than a few cm. Information on spatially distributed debris surface temperature has the potential to provide insight into the properties of the debris, its effects on the ice below and its influence on the near-surface boundary layer. Here, we deploy an unmanned aerial vehicle (UAV) equipped with a thermal infrared sensor on three separate missions over one day to map changing surface temperatures across the debris-covered Lirung Glacier in the Central Himalaya. We present a methodology to georeference and process the acquired thermal imagery, and correct for emissivity and sensor bias. Derived UAV surface temperatures are compared with distributed simultaneous in situ temperature measurements as well as with Landsat 8 thermal satellite imagery. Results show that the UAV-derived surface temperatures vary greatly both spatially and temporally, with -1.4 ± 1.8, 11.0 ± 5.2 and 15.3 ± 4.7 °C for the three flights (mean ± sd), respectively. The range in surface temperatures over the glacier during the morning is very large, spanning almost 50 °C. Ground-based measurements are generally in agreement with the UAV imagery, but considerable deviations are present that are likely due to differences in measurement technique and approach, and validation is difficult as a result. The difference in spatial and temporal variability captured by the UAV as compared with much coarser satellite imagery is striking, and it shows that satellite-derived temperature maps should be interpreted with care. We conclude that UAVs provide a suitable means to acquire surface temperature maps of debris-covered glacier surfaces at
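
    A common first-order emissivity correction for radiometric surface temperatures, of the kind this processing chain requires, can be sketched as follows. This is a simplified Stefan-Boltzmann (radiance ~ T^4) approximation with an assumed sky temperature; the record's actual correction and sensor-bias handling are more involved.

```python
def emissivity_corrected_temp(t_meas_c, emissivity, t_sky_c=-40.0):
    """First-order emissivity correction for a radiometric temperature.

    Treats the sensor-reported temperature as a brightness temperature,
    assumes radiance scales as T^4, and removes the reflected downwelling
    (sky) component:
        T_s^4 = (T_b^4 - (1 - eps) * T_sky^4) / eps
    Temperatures in and out are degrees Celsius.
    """
    t_b = t_meas_c + 273.15
    t_sky = t_sky_c + 273.15
    t_s4 = (t_b**4 - (1.0 - emissivity) * t_sky**4) / emissivity
    return t_s4 ** 0.25 - 273.15
```

    Because debris emissivity is below 1 and the sky is cold, the corrected surface temperature comes out slightly warmer than the raw brightness temperature.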

  5. Use of remote sensing techniques for geological hazard surveys in vegetated urban regions. [multispectral imagery for lithological mapping

    NASA Technical Reports Server (NTRS)

    Stow, S. H.; Price, R. C.; Hoehner, F.; Wielchowsky, C.

    1976-01-01

    The feasibility of using aerial photography for lithologic differentiation in a heavily vegetated region is investigated using multispectral imagery obtained from LANDSAT satellite and aircraft-borne photography. Delineating and mapping of localized vegetal zones can be accomplished by the use of remote sensing because a difference in morphology and physiology results in different natural reflectances or signatures. An investigation was made to show that these local plant zones are affected by altitude, topography, weathering, and gullying; but are controlled by lithology. Therefore, maps outlining local plant zones were used as a basis for lithologic map construction.

  6. Physical properties of shallow landslides and their role in landscape evolution investigated with ultrahigh-resolution lidar data and aerial imagery

    NASA Astrophysics Data System (ADS)

    Nelson, M. D.; Bryk, A. B.; Fauria, K.; Huang, M. H.; Dietrich, W. E.

    2017-12-01

    Shallow landslides are often a primary method of sediment transport and a dominant process of hillslope evolution in steep, soil-mantled landscapes. However, detailed studies of single landslides can be difficult to generalize across a landscape, and watershed-scale analyses using coarse-resolution digital elevation models often fail to capture the detail necessary to understand the mechanics of individual slides. During February 2017, an intense rainfall event generated over 400 shallow landslides within a 13 km2 field site in Colusa County, Northern California, providing a unique opportunity to investigate how landsliding affects landscape morphology at multiple scales. The hilly grass and oak woodland site is underlain by Great Valley Sequence shale, sandstone, and conglomerate turbidites uniformly dipping 50° east, with ridgelines and valleys following bedding orientation. Here we present results from ultrahigh-resolution (~100 points per square meter) airborne lidar data and aerial imagery collected directly after the event, as well as high-resolution airborne lidar data collected in 2015 and preliminary findings from field surveys. Of the 136 landslides surveyed so far, the failure surface was at the soil-weathered bedrock boundary in 85%. Only 69% of the landslides traveled down hillslopes and reached active channels, and of these, 37% transformed into debris flows that scoured channel pathways to bedrock. These small landslides have a median width of 3.2 m and average failure depth of 0.4 m. Landslides occurred at a median pre-failure ground surface slope of 35°, and only 56% occurred in convergent or weakly convergent areas. This comprehensive before-and-after dataset is being used as a rigorous test of shallow landslide models that predict landslide size and location, as well as a lens to investigate patterns in slope stability and failure across the landscape.
After multiple years of fieldwork at this study site where small landslide scars suggested

  7. Aerial thermography for energy efficiency of buildings: the ChoT project

    NASA Astrophysics Data System (ADS)

    Mandanici, Emanuele; Conte, Paolo

    2016-10-01

    The ChoT project aims at analysing the potential of aerial thermal imagery to produce large scale datasets for energetic efficiency analyses and policies in urban environments. It is funded by the Italian Ministry of Education, University and Research (MIUR) in the framework of the SIR 2014 (Scientific Independence of young Researchers) programme. The city of Bologna (Italy) was chosen as the case study. The acquisition of thermal infrared images at different times by multiple aerial flights is one of the main tasks of the project. The present paper provides an overview of the ChoT project, but it delves into some specific aspects of the data processing chain: the computing of the radiometric quantities of the atmosphere, the estimation of surface emissivity (through an object-oriented classification applied on a very high resolution multispectral image, to distinguish among the major roofing materials) and sky-view factor (by means of a digital surface model). To collect ground truth data, the surface temperature of roofs and road pavings was measured at several locations at the same time as the aircraft acquired the thermal images. Furthermore, the emissivity of some roofing materials was estimated by means of a thermal camera and a contact probe. All the surveys were georeferenced by GPS. The results of the first surveying campaign demonstrate the high sensitivity of the model to the variability of the surface emissivity and the atmospheric parameters.

  8. Games people play: How video games improve probabilistic learning.

    PubMed

    Schenk, Sabrina; Lech, Robert K; Suchan, Boris

    2017-09-29

    Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance of video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed stronger activation clusters for video gamers in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas that are related to attention and executive functions as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the usage of declarative knowledge as well as hippocampal involvement, and enhance overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance, independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Evaluate ERTS imagery for mapping and detection of changes of snowcover on land and on glaciers

    NASA Technical Reports Server (NTRS)

    Meier, M. F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The percentage of snow cover area on specific drainage basins was measured from ERTS-1 imagery by video density slicing with a repeatability of 4 percent of the snow covered area. Data from ERTS-1 images of the melt season snow cover in the Thunder Creek drainage basin in the North Cascades were combined with existing hydrologic and meteorologic observations to enable calculations of the time distribution of the water stored in this mountain snowpack. Similar data could be used for frequent updating of expected inflow to reservoirs. Equivalent snowline altitudes were determined from area measurements. Snowline altitudes were also determined by combining enlarged ERTS-1 images with maps. ERTS-1 imagery was also successfully used to measure glacier accumulation area ratios for a small test basin.
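
    Density slicing as used in this record amounts to thresholding image brightness within a basin mask and reporting the snow-covered fraction. A minimal sketch (the threshold value is an assumption; operationally it would be tuned per image):

```python
import numpy as np

def snow_cover_percent(image, basin_mask, threshold):
    """Density slicing: pixels inside the basin brighter than `threshold`
    are classified as snow; return the snow-covered area as a percentage
    of the basin area."""
    basin = image[basin_mask]
    return 100.0 * np.count_nonzero(basin > threshold) / basin.size
```

    Repeating this over a sequence of acquisitions yields the melt-season depletion curve that the record combines with hydrologic observations.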

  10. Is Tickling Torture? Assessing Welfare towards Slow Lorises (Nycticebus spp.) within Web 2.0 Videos.

    PubMed

    Nekaris, K Anne I; Musing, Louisa; Vazquez, Asier Gil; Donati, Giuseppe

    2015-01-01

    Videos, memes and images of pet slow lorises have become increasingly popular on the Internet. Although some video sites allow viewers to tag material as 'animal cruelty', no site has yet acknowledged the presence of cruelty in slow loris videos. We examined 100 online videos to assess whether they violated the 'five freedoms' of animal welfare and whether presence or absence of these conditions contributed to the number of thumbs up and views received by the videos. We found that all 100 videos showed at least 1 condition known as negative for lorises, indicating absence of the necessary freedom; 4% showed only 1 condition, but in nearly one third (31.3%) all 5 chosen criteria were present, including human contact (57%), daylight (87%), signs of stress/ill health (53%), unnatural environment (91%) and isolation from conspecifics (77%). The public were more likely to like videos where a slow loris was kept in the light or displayed signs of stress. Recent work on primates has shown that imagery of primates in a human context can cause viewers to perceive them as less threatened. Prevalence of a positive public opinion of such videos is a real threat towards awareness of the conservation crisis faced by slow lorises. © 2016 S. Karger AG, Basel.

  11. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    The technology of vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is considered as a problem of moving object detection. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. The experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
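
    Low rank and sparse decomposition of the kind applied here splits a data matrix M into a low-rank background part L and a sparse foreground part S. The sketch below is a generic inexact augmented Lagrange multiplier scheme for principal component pursuit, not the authors' multi-feature formulation; all parameter choices are conventional defaults.

```python
import numpy as np

def soft_threshold(x, tau):
    """Entrywise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M ~ L + S (L low-rank, S sparse) via a basic inexact
    augmented Lagrange multiplier scheme for principal component pursuit."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    norm_M = np.linalg.norm(M)
    mu = 1.25 / (np.linalg.norm(M, 2) + 1e-12)  # initial penalty weight
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
        # sparse update: entrywise shrinkage
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y += mu * R
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(R) / norm_M < tol:
            break
    return L, S
```

    In a video-detection setting, each column of M would be a vectorized frame; the sparse component S then carries the moving object (here, the drogue).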

  12. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all information is not present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic real-time full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills, enhances visible, night vision, and infrared data, and successfully applies to night-time and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and
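
    The fusion mechanism described, combining differently exposed captures so that both dark and bright regions retain detail, can be illustrated by a single-scale, Mertens-style well-exposedness weighting. This is a toy sketch; the system's actual fusion and smart-acquisition logic are not disclosed in the abstract, and the sigma value is an assumption.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend differently exposed grayscale frames (values in [0, 1]) using
    per-pixel 'well-exposedness' weights: pixels near mid-gray dominate,
    so each region is drawn from the exposure that rendered it best."""
    frames = np.asarray(frames, dtype=np.float64)
    weights = np.exp(-0.5 * ((frames - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0) + 1e-12      # normalize across exposures
    return (weights * frames).sum(axis=0)
```

    Production exposure fusion usually adds contrast and saturation weights and multiscale (Laplacian pyramid) blending to avoid seams; the weighting idea is the same.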

  13. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation

    PubMed Central

    Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.

    2016-01-01

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196

  14. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation.

    PubMed

    Gonzalez, Luis F; Montes, Glen A; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J

    2016-01-14

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.

  15. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
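
    Computing 3D coordinates from pixel coordinates with a calibrated, fixed camera reduces, in the simplest case, to intersecting the back-projected viewing ray with a known plane. A sketch under that assumption; the intrinsics and pose below are hypothetical, and the actual system may solve a more general multi-target problem.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, R, t, plane_z=0.0):
    """Back-project pixel (u, v) through a calibrated pinhole camera and
    intersect the viewing ray with the horizontal plane z = plane_z.

    K    : 3x3 intrinsic matrix
    R, t : world-to-camera rotation and translation (x_cam = R @ x_world + t)
    """
    # viewing-ray direction expressed in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    # camera center in world coordinates
    c = -R.T @ t
    s = (plane_z - c[2]) / d[2]          # scale along the ray to reach the plane
    return c + s * d
```

    Millimeter-level accuracy over meter-scale distances, as the record claims, then depends on careful intrinsic calibration and sub-pixel target centroiding.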

  16. Four years of UAS Imagery Reveals Vegetation Change Due to Permafrost Thaw

    NASA Astrophysics Data System (ADS)

    DelGreco, J. L.; Herrick, C.; Varner, R. K.; McArthur, K. J.; McCalley, C. K.; Garnello, A.; Finnell, D.; Anderson, S. M.; Crill, P. M.; Palace, M. W.

    2017-12-01

    Warming trends in sub-arctic regions have resulted in thawing of permafrost which in turn induces change in vegetation across peatlands. Collapse of palsas (i.e. permafrost plateaus) has also been correlated to increases in methane (CH4) emissions to the atmosphere. Vegetation change provides new microenvironments that promote CH4 production and emission, specifically through plant interactions and structure. By quantifying the changes in vegetation at the landscape scale, we will be able to understand the impact of thaw on CH4 emissions in these complex and climate sensitive northern ecosystems. We combine field-based measurements of vegetation composition and high resolution Unmanned Aerial Systems (UAS) imagery to characterize vegetation change in a sub-arctic mire. At Stordalen Mire (1 km x 0.5 km), Abisko, Sweden, we flew a fixed-wing UAS in July of each year between 2014 and 2017. High precision GPS ground control points were used to georeference the imagery. Seventy-five randomized square-meter plots were measured for vegetation composition and individually classified into one of five cover types, each representing a different stage of permafrost degradation. With this training data, each year of imagery was classified by cover type. The developed cover type maps were also used to estimate CH4 emissions across the mire based on average flux CH4 rates from each cover type obtained from flux chamber measurements collected at the mire. This four year comparison of vegetation cover and methane emissions has indicated a rapid response to permafrost thaw and changes in emissions. Estimation of vegetation cover types is vital in our understanding of the evolution of northern peatlands and its future role in the global carbon cycle.
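
    The upscaling step this record describes, multiplying each cover type's classified area by its chamber-derived mean flux, is an area-weighted sum. A minimal sketch with illustrative cover-type names and units (mg CH4 m⁻² d⁻¹, areas in m²):

```python
def mire_methane_estimate(area_m2_by_cover, flux_by_cover):
    """Scale chamber-measured mean CH4 fluxes (mg CH4 m^-2 d^-1) by the
    classified area of each cover type to a whole-site daily emission
    (mg CH4 d^-1)."""
    return sum(area_m2_by_cover[c] * flux_by_cover[c] for c in area_m2_by_cover)
```

    Re-running this with each year's classified cover map is what turns the vegetation-change time series into an emissions-change estimate.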

  17. Detection of rice sheath blight using an unmanned aerial system with high-resolution color and multispectral imaging.

    PubMed

    Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong

    2018-01-01

    Detection and monitoring are the first essential steps for effective management of sheath blight (ShB), a major disease in rice worldwide. Unmanned aerial systems have high potential to improve this detection process, since they can reduce the time needed to scout for the disease at a field scale, and are affordable and user-friendly in operation. In this study, a commercialized quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery data of research plots with 67 rice cultivars and elite lines. Collected imagery data were then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of images, it was found that the color transformation could qualitatively detect the infected areas of ShB in the field plots. However, it was less effective at detecting different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truths of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. The results of relationship analyses indicate that there was a strong correlation between ground-measured NDVIs and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Use of image-based NDVIs extracted from multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool to detect the ShB disease at a field scale.
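
    The image-based metrics in this record are straightforward to compute: NDVI from the red and near-infrared bands, plus the RMSE and R2 agreement statistics used to compare against ground truth. These are the generic formulas, not tied to the specific cameras used in the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + 1e-12)   # epsilon guards bare-soil zeros

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    pred, obs = np.asarray(pred, dtype=np.float64), np.asarray(obs, dtype=np.float64)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r_squared(pred, obs):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    pred, obs = np.asarray(pred, dtype=np.float64), np.asarray(obs, dtype=np.float64)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```
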

  18. Characterisation of recently retrieved aerial photographs of Ethiopia (1935-1941) and their fusion with current remotely sensed imagery for retrospective geomorphological analysis

    NASA Astrophysics Data System (ADS)

    Nyssen, Jan; Gebremeskel, Gezahegne; Mohamed, Sultan; Petrie, Gordon; Seghers, Valérie; Meles Hadgu, Kiros; De Maeyer, Philippe; Haile, Mitiku; Frankl, Amaury

    2013-04-01

    8281 assemblages of aerial photographs (APs) acquired by the 7a Sezione Topocartografica during the Italian occupation of Ethiopia (1935-1941) have recently been discovered, scanned and organised. Until now, the oldest known APs of the country were taken in the period 1958-1964. The APs of the 1930s were analysed for their technical characteristics, scale, flight lines, coverage, use in topographic mapping, and potential future uses. The APs over Ethiopia in 1935-1941 are presented as assemblages on approx. 50 cm x 20 cm cardboard tiles, each holding a label, one nadir-pointing photograph flanked by two low-oblique photographs and one high-oblique photograph. The four APs were exposed simultaneously and were taken across the flight line; the high-oblique photograph is presented alternately at left and at right; there is approx. 60% overlap between subsequent sets of APs. One of Santoni's glass plate multi-cameras was used, with a focal length of 178 mm and a flight height of 4000-4500 m a.s.l., which results in an approximate scale of 1:11 500 for the central photograph and 1:16 000 to 1:18 000 for the low-oblique APs. The surveyors oriented themselves with maps of Ethiopia at 1:400 000 scale, compiled in 1934. The flights present a dense AP coverage of Northern Ethiopia, where they were acquired in the context of upcoming battles with the Ethiopian army. Several flights preceded the later advance of the Italian army southwards towards the capital Addis Ababa. Further flights took place in central Ethiopia for civilian purposes. As of 1936, the APs were used to prepare highly detailed topographic maps at 1:100 000 scale. These APs (1935-1941), together with APs of 1958-1964 and 1994 and recent high-resolution satellite imagery, are currently being used in spatially explicit change studies of land cover, land management and (hydro)geomorphology in Ethiopia over a time span of almost 80 years, the first results of which will be presented.
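
    The scale figures quoted follow from the standard vertical-photo relation, scale = f / (H - h), where f is the focal length and (H - h) the height above the terrain. A sketch using the record's 178 mm focal length; the terrain elevation in the example is a hypothetical value, since the record gives only the flight height above sea level.

```python
def photo_scale_denominator(focal_length_mm, flying_height_m, ground_elev_m=0.0):
    """Nominal vertical-photo scale denominator: 1 : (H - h) / f,
    with the focal length converted to the same units as the height
    above the terrain."""
    f_m = focal_length_mm / 1000.0
    return (flying_height_m - ground_elev_m) / f_m
```

    For plausible highland terrain elevations, this lands near the record's 1:11 500 central-photograph scale.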

  19. A Standardised Vocabulary for Identifying Benthic Biota and Substrata from Underwater Imagery: The CATAMI Classification Scheme

    PubMed Central

    Jordan, Alan; Rees, Tony; Gowlett-Holmes, Karen

    2015-01-01

    Imagery collected by still and video cameras is an increasingly important tool for minimal impact, repeatable observations in the marine environment. Data generated from imagery includes identification, annotation and quantification of biological subjects and environmental features within an image. To be long-lived and useful beyond their project-specific initial purpose, and to maximize their utility across studies and disciplines, marine imagery data should use a standardised vocabulary of defined terms. This would enable the compilation of regional, national and/or global data sets from multiple sources, contributing to broad-scale management studies and development of automated annotation algorithms. The classification scheme developed under the Collaborative and Automated Tools for Analysis of Marine Imagery (CATAMI) project provides such a vocabulary. The CATAMI classification scheme introduces Australian-wide acknowledged, standardised terminology for annotating benthic substrates and biota in marine imagery. It combines coarse-level taxonomy and morphology, and is a flexible, hierarchical classification that bridges the gap between habitat/biotope characterisation and taxonomy, acknowledging limitations when describing biological taxa through imagery. It is fully described, documented, and maintained through curated online databases, and can be applied across benthic image collection methods, annotation platforms and scoring methods. Following release in 2013, the CATAMI classification scheme was taken up by a wide variety of users, including government, academia and industry. This rapid acceptance highlights the scheme’s utility and the potential to facilitate broad-scale multidisciplinary studies of marine ecosystems when applied globally. Here we present the CATAMI classification scheme, describe its conception and features, and discuss its utility and the opportunities as well as challenges arising from its use. PMID:26509918

  20. Chromotomosynthesis for high speed hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Bostick, Randall L.; Perram, Glen P.

    2012-09-01

    A chromotomosynthetic imaging (CTI) system based on a rotating direct-vision prism, operating in the visible, creates hyperspectral imagery by collecting a set of 2D images, each spectrally projected at a different rotation angle of the prism. Mathematical reconstruction techniques that have been well tested in the field of medical physics are used to reconstruct the data into the 3D hyperspectral image. The instrument operates with a 100 mm focusing lens in the spectral range of 400-900 nm, with a field of view of 71.6 mrad and angular resolution of 0.8-1.6 μrad. The spectral resolution is 0.6 nm at the shortest wavelengths, degrading to over 10 nm at the longest wavelengths. Measurements using a point-like target show that performance is limited by chromatic aberration. The accuracy and utility of the instrument are assessed by comparing the CTI results to spatial data collected by a wide-band imager and hyperspectral data collected using a liquid crystal tunable filter (LCTF). The wide-band spatial content of the scene reconstructed from the CTI data is of the same or better quality than a single frame collected by the undispersed imaging system when projections are taken at every 1°. Performance depends on the number of projections used, with projections every 5° producing adequate results in terms of target characterization. The data collected by the CTI system can provide spatial information of equal quality to a comparable imaging system, provide high-frame-rate slitless 1D spectra, and generate 3D hyperspectral imagery that can be exploited to provide the same results as a traditional multi-band spectral imaging system. While this prototype does not operate at high speeds, components exist that will allow CTI systems to generate hyperspectral video imagery at rates greater than 100 Hz. The instrument has considerable potential for characterizing bomb detonations, muzzle flashes, and other battlefield combustion events.
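The reconstruction principle, projections collected at many rotation angles and combined tomographically, can be sketched with a minimal unfiltered back-projection in NumPy. This is an illustrative analogue of the medical-physics reconstruction the abstract refers to, not the authors' algorithm; the point-source phantom, the angle set, and the nearest-neighbour sampling are simplifications:

```python
import numpy as np

def project(img, theta):
    # crude Radon projection: sample the image on a grid rotated by
    # theta (radians, nearest neighbour) and sum along columns
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xr = np.cos(theta) * (xs - c) - np.sin(theta) * (ys - c) + c
    yr = np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.clip(np.round(xr).astype(int), 0, n - 1)
    yi = np.clip(np.round(yr).astype(int), 0, n - 1)
    return img[yi, xi].sum(axis=0)

def backproject(projections, thetas, n):
    # smear each 1D projection back across the image plane and average
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for p, theta in zip(projections, thetas):
        bins = np.round(np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c)
        idx = np.clip(bins.astype(int), 0, n - 1)
        recon += p[idx]
    return recon / len(thetas)

# point-source phantom; projections every 5 degrees, as in the abstract
n = 64
phantom = np.zeros((n, n))
phantom[20, 40] = 1.0
thetas = np.deg2rad(np.arange(0.0, 180.0, 5.0))
sinogram = [project(phantom, t) for t in thetas]
recon = backproject(sinogram, thetas, n)
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

In the CTI instrument one image axis is the mixed spatial/spectral projection, but the back-projection idea is the same: a point source reappears at its true location once enough angular projections are combined.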

  1. Monitoring the changing position of coastlines using aerial and satellite image data: an example from the eastern coast of Trabzon, Turkey.

    PubMed

    Sesli, Faik Ahmet; Karsli, Fevzi; Colkesen, Ismail; Akyol, Nihat

    2009-06-01

    Coastline mapping and coastline change detection are critical issues for safe navigation, coastal resource management, coastal environmental protection, and sustainable coastal development and planning. Changes in the shape of a coastline may fundamentally affect the environment of the coastal zone, and may be caused by natural processes and/or human activities. Over the past 30 years, coastal sites in Turkey have been under intense pressure associated with population growth driven by domestic and foreign tourism. In addition, urbanization on filled (reclaimed) areas, settlements, highways constructed to relieve traffic problems, and other developments in the coastal region clearly confirm this pressure. Aerial photos with medium spatial resolution and high-resolution satellite imagery are ideal data sources for mapping coastal land use and monitoring its changes over a large area. This study introduces an efficient method to monitor coastline and coastal land-use changes using time-series aerial photos (1973 and 2002) and satellite imagery (2005) covering the same geographical area. Results show the effectiveness of digital photogrammetry and remote sensing data for monitoring the coastal land-use status of a large area. The study also shows that over 161 ha of land was filled in the research area, and 12.2 ha of coastal erosion was measured along the coast for the period 1973 to 2005. Consequently, monitoring of coastal land use is necessary for coastal-area planning in order to protect coastal areas from climate change and other coastal processes.
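The fill and erosion figures are, in essence, pixel tallies over co-registered classified rasters from the two epochs. A minimal sketch of how such areas are computed, with an assumed pixel size (the study's actual image resolutions are not given in this excerpt):

```python
import numpy as np

def change_area_ha(change_mask, pixel_size_m):
    # area of a boolean change mask (e.g. "filled" or "eroded" pixels)
    # in hectares: pixel count * pixel area (m^2) / 10 000
    return change_mask.sum() * pixel_size_m ** 2 / 10_000.0

# hypothetical 1 m pixels: a 400 x 300 block of "filled" pixels = 12 ha
mask = np.zeros((1000, 1000), dtype=bool)
mask[100:500, 200:500] = True
area = change_area_ha(mask, 1.0)
```

In practice the mask would come from differencing classified coastline maps of 1973, 2002 and 2005 after georeferencing them to a common datum.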

  2. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for motion imagery. This presentation will discuss several implementations of target projectors with moving targets, or apparent moving targets, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts at velocity to MTF (resolution), SNR and minimum detectable signal. Several unique metrics are suggested for motion imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measure of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as in comparing various systems by presenting exactly the same scenes to the cameras in a repeatable way.

  3. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

    Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time-consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer-vision detectors and machine-learning techniques, we are capable of producing high-quality track information for more than 40,000 vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their
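The multi-scale tile idea can be sketched as a standard tile pyramid: level 0 stores the full-resolution image cut into fixed-size tiles, and each higher level halves both image dimensions. The mapping from a region of interest to the tiles that must be fetched is then simple integer arithmetic. This is a generic sketch of the technique, not the authors' actual file layout:

```python
def tiles_for_region(x0, y0, x1, y1, level, tile=256):
    # tile indices (level, tx, ty) covering a region given in
    # full-resolution pixel coordinates; level 0 = full resolution,
    # each level halves both image dimensions
    s = 2 ** level
    tx0, ty0 = (x0 // s) // tile, (y0 // s) // tile
    tx1, ty1 = ((x1 - 1) // s) // tile, ((y1 - 1) // s) // tile
    return [(level, tx, ty)
            for ty in range(ty0, ty1 + 1)
            for tx in range(tx0, tx1 + 1)]

full_res = tiles_for_region(0, 0, 512, 512, level=0)  # four 256 px tiles
overview = tiles_for_region(0, 0, 512, 512, level=1)  # one tile suffices
```

Storing tiles in a predefined order, say level-major then row-major, lets a reader convert these indices directly into frame offsets and skip everything else.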

  4. Efficient Feature Extraction and Likelihood Fusion for Vehicle Tracking in Low Frame Rate Airborne Video

    DTIC Science & Technology

    2010-07-01

    imagery, persistent sensor array I. Introduction New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a...geometric occlusions between target and sensor, motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small...distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with computational photography techniques enable the

  5. High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations

    NASA Astrophysics Data System (ADS)

    Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas

    2007-10-01

    A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral systems, two multiband cameras (IIS 4-band and Itek 9-band) and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm x 24 cm format) tested were: Agfa color negative and extended-red (visible and near-infrared) panchromatic; and Kodak color infrared and B&W (visible and near-infrared) infrared. All films were negative-processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film-density conversion to a digital product while retaining the original film-density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques, as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling set-up. Results indicate that multispectral photogrammetric systems offer improved feature-mapping capability.

  6. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor images has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently co-registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information have been utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computational cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated, and the results obtained are discussed.
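The underlying similarity measure is standard mutual information estimated from a joint histogram; the paper's Combined MI extends this to a trivariate pdf over optical intensity, LiDAR DSM and LiDAR intensity. A minimal pairwise MI sketch (illustrative only, not the paper's implementation), showing why MI peaks for registered image pairs:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI(A;B) = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    # estimated from a joint histogram of the two images
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of B
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
aligned = img + 0.1 * rng.normal(size=img.shape)      # well-registered pair
shuffled = rng.permutation(img.ravel()).reshape(img.shape)  # misregistered
mi_good = mutual_information(img, aligned)
mi_bad = mutual_information(img, shuffled)
```

A registration search maximizes this score over the transformation parameters; the paper's local variant evaluates it over windows to keep the higher-dimensional optimisation tractable.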

  7. Using high-resolution satellite imagery to assess populations of animals in the Antarctic

    NASA Astrophysics Data System (ADS)

    LaRue, Michelle Ann

    The Southern Ocean is one of the most rapidly changing ecosystems on the planet due to the effects of climate change and commercial fishing for ecologically important krill and fish. It is imperative that populations of indicator species, such as penguins and seals, be monitored at regional to global scales to disentangle the effects of climatic and anthropogenic changes for appropriate ecosystem-based management of the Southern Ocean. Remotely monitoring populations through high-resolution satellite imagery is currently the only feasible way to gain information about population trends of penguins and seals across Antarctica. In my first chapter, I review the literature in which high-resolution satellite imagery has been used to assess populations of animals in polar regions. Building on this literature, my second chapter focuses on estimating changes in abundance in the Weddell seal population in Erebus Bay. I found a strong correlation between ground and satellite counts, and this finding provides an alternate method for assessing populations of Weddell seals in areas where less is known about population status. My third chapter explores how the size of the guano stain of Adelie penguins can be used to predict population size. Using high-resolution imagery and ground counts, I built a model to estimate the breeding population of Adelie penguins, applying a supervised classification to estimate guano extent. These results suggest that the size of the guano stain is an accurate predictor of population size and can be applied to estimate remote Adelie penguin colonies. In my fourth chapter, I use air photos, satellite imagery, climate and mark-resight data to determine that climate change has positively impacted the population of Adelie penguins at Beaufort Island through a habitat release that ultimately affected the dynamics within the southern Ross Sea metapopulation. 
Finally, for my fifth chapter I combined the literature with observations from aerial surveys and satellite imagery to
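The guano-area-to-population model in the third chapter is, at its core, a regression of ground-counted breeding pairs against classified stain area. A sketch of that calibration step with entirely hypothetical numbers (the dissertation's data and model form are not reproduced here):

```python
import numpy as np

# hypothetical calibration data: guano-stain area (m^2) from supervised
# classification vs. ground-counted breeding pairs at reference colonies
area = np.array([1200.0, 3400.0, 5100.0, 8000.0, 15000.0])
pairs = np.array([950.0, 2700.0, 4100.0, 6300.0, 12000.0])

# least-squares linear fit: pairs ~ slope * area + intercept
slope, intercept = np.polyfit(area, pairs, 1)

# apply the fitted model to an unsurveyed colony of 10 000 m^2 of guano
predicted = slope * 10000.0 + intercept
```

Once calibrated against colonies with ground counts, the same line can be applied to guano extents classified at remote, unvisited colonies.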

  8. Integration of aerial imaging and variable-rate technology for site-specific aerial herbicide application

    USDA-ARS?s Scientific Manuscript database

    As remote sensing and variable rate technology are becoming more available for aerial applicators, practical methodologies on effective integration of these technologies are needed for site-specific aerial applications of crop production and protection materials. The objectives of this study were to...

  9. The Intersection of Imagery Ability, Imagery Use, and Learning Style: An Exploratory Study

    ERIC Educational Resources Information Center

    Bolles, Gina; Chatfield, Steven J.

    2009-01-01

    This study explores the intersection of the individual's imagery ability, imagery use in dance training and performance, and learning style. Thirty-four intermediate-level ballet and modern dance students at the University of Oregon completed the Movement Imagery Questionnaire-Revised (MIQ-R) and Kolb's Learning Style Inventory-3 (LSI-3). The four…

  10. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limit the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete and ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed as a solution for systems integrators to create mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, the DMS addresses the airframe's requirements for high-accuracy position and orientation for such tasks as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control. This paper presents results using a DMS comprising an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without ground control points on a Microdrones md4-1000 platform, conducted by Applanix and Avyon. The APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera with a 36 MP sensor and 4.9-micron pixels, producing images of 7360 columns by 4912 rows. It was configured with a 50 mm AF-S Nikkor f/1.8 lens and subsequently with a 35 mm Zeiss Sonnar T* FE F2.8 lens. Both camera/lens combinations and the APX-15 were mounted to a
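For a nadir image, the ground sample distance follows from pixel pitch, focal length and flying height: GSD = p * H / f. A quick sketch using the a7R/50 mm figures above and an assumed flying height of 120 m above ground (the mission altitude is not given in this excerpt):

```python
def gsd_cm(pixel_size_um, focal_length_mm, altitude_m):
    # ground sample distance in cm/pixel for a nadir image:
    # GSD = pixel size * flying height / focal length
    return pixel_size_um * 1e-6 * altitude_m / (focal_length_mm * 1e-3) * 100.0

# Sony a7R pixel pitch 4.9 um, 50 mm lens, assumed 120 m flying height
g = gsd_cm(4.9, 50.0, 120.0)
```

At that assumed altitude the 50 mm configuration yields roughly centimetre-level pixels on the ground; the 35 mm lens widens the swath at the cost of a proportionally coarser GSD.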

  11. Reinforcement Learning with Autonomous Small Unmanned Aerial Vehicles in Cluttered Environments

    NASA Technical Reports Server (NTRS)

    Tran, Loc; Cross, Charles; Montague, Gilbert; Motter, Mark; Neilan, James; Qualls, Garry; Rothhaar, Paul; Trujillo, Anna; Allen, B. Danette

    2015-01-01

    We present ongoing work in the Autonomy Incubator at NASA Langley Research Center (LaRC) exploring the efficacy of a data set aggregation approach to reinforcement learning for small unmanned aerial vehicle (sUAV) flight in dense and cluttered environments with reactive obstacle avoidance. The goal is to learn an autonomous flight model using training experiences from a human piloting a sUAV around static obstacles. The training approach uses video data from a forward-facing camera that records the human pilot's flight. Various computer-vision-based features relating to edge and gradient information are extracted from the video. The recorded human control inputs are used to train an autonomous control model that correlates the extracted feature vector to a yaw command. As part of the reinforcement learning approach, the autonomous control model is iteratively updated with feedback from a human agent who corrects undesired model output. This data-driven approach to autonomous obstacle avoidance is explored for simulated forest environments, furthering research on autonomous flight under the tree canopy. This enables flight in previously inaccessible environments that are of interest to NASA researchers in the Earth and atmospheric sciences.
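The loop described above, train on human demonstrations, roll out the learner, have the human relabel the states it visits, and fold the corrections back into the training set, is the data set aggregation (DAgger) pattern. A schematic sketch with a linear feature-to-yaw model and a synthetic "expert"; all names and data are invented for illustration, and this is not the Autonomy Incubator's code:

```python
import numpy as np

def train(features, commands):
    # least-squares linear map from feature vectors to yaw commands
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, commands, rcond=None)
    return w

def predict(w, f):
    return np.append(f, 1.0) @ w

# initial demonstration data (hypothetical: 3 image features -> yaw)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([0.5, -1.0, 0.2])         # stand-in for the human policy
y = X @ true_w
data_X, data_y = list(X), list(y)

# DAgger-style iterations: run the learner, let the expert relabel the
# states it visits, and aggregate everything into one data set
w = train(np.array(data_X), np.array(data_y))
for _ in range(3):
    visited = rng.normal(size=(20, 3))      # states from a learner rollout
    expert_labels = visited @ true_w        # human-corrected yaw commands
    data_X.extend(visited)
    data_y.extend(expert_labels)
    w = train(np.array(data_X), np.array(data_y))
```

Aggregating corrections on the learner's own state distribution is what distinguishes this from plain behavior cloning, which only ever sees the human pilot's states.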

  12. Kinesthetic imagery of musical performance

    PubMed Central

    Lotze, Martin

    2013-01-01

    Musicians use different kinds of imagery. This review focuses on kinesthetic imagery, which has been shown to be an effective complement to actively playing an instrument. However, experience in actual movement performance seems to be a requirement for a recruitment of those brain areas representing movement ideation during imagery. An internal model of movement performance might be more differentiated when training has been more intense or simply performed more often. Therefore, with respect to kinesthetic imagery, these strategies are predominantly found in professional musicians. There are a few possible reasons as to why kinesthetic imagery is used in addition to active training; one example is the need for mental rehearsal of the technically most difficult passages. Another reason for mental practice is that mental rehearsal of the piece helps to improve performance if the instrument is not available for actual training as is the case for professional musicians when they are traveling to various appearances. Overall, mental imagery in musicians is not necessarily specific to motor, somatosensory, auditory, or visual aspects of imagery, but integrates them all. In particular, the audiomotor loop is highly important, since auditory aspects are crucial for guiding motor performance. All these aspects result in a distinctive representation map for the mental imagery of musical performance. This review summarizes behavioral data, and findings from functional brain imaging studies of mental imagery of musical performance. PMID:23781196

  13. Kinesthetic imagery of musical performance.

    PubMed

    Lotze, Martin

    2013-01-01

    Musicians use different kinds of imagery. This review focuses on kinesthetic imagery, which has been shown to be an effective complement to actively playing an instrument. However, experience in actual movement performance seems to be a requirement for a recruitment of those brain areas representing movement ideation during imagery. An internal model of movement performance might be more differentiated when training has been more intense or simply performed more often. Therefore, with respect to kinesthetic imagery, these strategies are predominantly found in professional musicians. There are a few possible reasons as to why kinesthetic imagery is used in addition to active training; one example is the need for mental rehearsal of the technically most difficult passages. Another reason for mental practice is that mental rehearsal of the piece helps to improve performance if the instrument is not available for actual training as is the case for professional musicians when they are traveling to various appearances. Overall, mental imagery in musicians is not necessarily specific to motor, somatosensory, auditory, or visual aspects of imagery, but integrates them all. In particular, the audiomotor loop is highly important, since auditory aspects are crucial for guiding motor performance. All these aspects result in a distinctive representation map for the mental imagery of musical performance. This review summarizes behavioral data, and findings from functional brain imaging studies of mental imagery of musical performance.

  14. The Art of Aerial Warfare

    DTIC Science & Technology

    2005-03-01

    Excerpt from the table of contents:
    3  THE POLITICAL DIMENSIONS OF AERIAL WARFARE . . . 17
       How Political Effects in Aerial Warfare Outweigh Military Effects . . . 19
       Political Targets Versus Military Targets . . . 22
    4  MILITARY AND POLITICAL EFFECTS OF STRATEGIC ATTACK . . . 35
       The Premise of

  15. Study of time-lapse processing for dynamic hydrologic conditions. [electronic satellite image analysis console for Earth Resources Technology Satellites imagery

    NASA Technical Reports Server (NTRS)

    Serebreny, S. M.; Evans, W. E.; Wiegman, E. J.

    1974-01-01

    The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and sediment plumes from estuary discharge, playa lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most distinctive feature of the system is the capability to time-lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital or via photographic transparencies.

  16. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    NASA Technical Reports Server (NTRS)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL), using a Sikorsky UH-60 helicopter, for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide-field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  17. 47 CFR 32.6431 - Aerial wire expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Aerial wire expense. 32.6431 Section 32.6431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6431 Aerial wire expense. This account shall include expenses associated with aerial wire. ...

  18. 47 CFR 32.6431 - Aerial wire expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Aerial wire expense. 32.6431 Section 32.6431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6431 Aerial wire expense. This account shall include expenses associated with aerial wire. ...

  19. Structural and Functional Connectivity from Unmanned-Aerial System Data

    NASA Astrophysics Data System (ADS)

    Masselink, Rens; Heckmann, Tobias; Casalí, Javier; Giménez, Rafael; Cerdá, Artemi; Keesstra, Saskia

    2017-04-01

    Over the past decade there has been an increase in both connectivity research and research involving Unmanned Aerial Systems (UASs). In some studies, UASs were successfully used for the assessment of connectivity, but not yet to their full potential. We present several ways to use data obtained from UASs to measure variables related to connectivity, and use these to assess both structural and functional connectivity. These assessments of connectivity can aid us in obtaining a better understanding of the dynamics of, e.g., sediment and nutrient transport. We identify three sources of data obtained from a consumer camera mounted on a fixed-wing UAS, which can be used separately or combined: visual and near-infrared imagery, point clouds, and digital elevation models (DEMs). Imagery (orthophotos) can be used for (automatic) mapping of connectivity features like rills, gullies and soil and water conservation measures, using supervised or unsupervised classification methods with, e.g., Object-Based Image Analysis. Furthermore, patterns of soil moisture in the top layers can be extracted from visual and near-infrared imagery. Point clouds can be analysed for vegetation height and density, and soil surface roughness. Lastly, DEMs can be used in combination with imagery for a number of tasks, including raster-based (e.g. DEM derivatives) and object-based (e.g. feature detection) analysis: flow routing algorithms can be used to analyse potential pathways of surface runoff and sediment transport, which allows for the assessment of structural connectivity through indices that are based, for example, on morphometric and other properties of surfaces, contributing areas, and pathways. Finally, erosion and deposition can be measured by calculating elevation changes from repeat surveys. 
From these "intermediate" variables like roughness, vegetation density and soil moisture, structural connectivity and functional connectivity can be assessed by combining them into a dynamic index of
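The flow-routing step applied to the UAS-derived DEMs is typically a D8-style steepest-descent analysis: each cell drains to whichever of its eight neighbours gives the steepest downhill slope. A minimal sketch of the per-cell direction choice (illustrative only; connectivity indices then build contributing areas and pathways on top of this):

```python
import numpy as np

# 8 neighbour offsets (dy, dx) and their centre-to-centre distances
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
DIST = [2 ** 0.5, 1.0, 2 ** 0.5, 1.0, 1.0, 2 ** 0.5, 1.0, 2 ** 0.5]

def d8_direction(dem, y, x):
    # index (0-7) of the steepest-descent neighbour of cell (y, x),
    # or None if the cell is a pit (no lower neighbour)
    best, best_slope = None, 0.0
    for k, (dy, dx) in enumerate(OFFSETS):
        ny, nx = y + dy, x + dx
        if 0 <= ny < dem.shape[0] and 0 <= nx < dem.shape[1]:
            slope = (dem[y, x] - dem[ny, nx]) / DIST[k]
            if slope > best_slope:
                best, best_slope = k, slope
    return best

# tilted plane sloping down towards increasing x: flow should go right
dem = np.arange(5)[None, :] * -1.0 + np.zeros((5, 5))
k = d8_direction(dem, 2, 2)
```

Repeating this for every cell yields the drainage network from which contributing areas, pathway lengths and connectivity indices are derived.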

  20. Aerial thermography for energy conservation

    NASA Technical Reports Server (NTRS)

    Jack, J. R.

    1978-01-01

    Thermal infrared scanning from an aircraft is a convenient and commercially available means for determining relative rates of energy loss from building roofs. The need to conserve energy as fuel costs rise makes the mass-survey capability of aerial thermography an attractive adjunct to community energy-awareness programs. Background information on the principles of aerial thermography is presented. Thermal infrared scanning systems, flight and environmental requirements for data acquisition, preparation of thermographs for display, major users and suppliers of thermography, and suggested specifications for obtaining aerial scanning services are reviewed.