Floating aerial LED signage based on aerial imaging by retro-reflection (AIRR).
Yamamoto, Hirotsugu; Tomiyama, Yuka; Suyama, Shiro
2014-11-03
We propose a floating aerial LED signage technique by utilizing retro-reflection. The proposed display is composed of LEDs, a half mirror, and retro-reflective sheeting. Directivity of the aerial image formation and size of the aerial image have been investigated. Furthermore, a floating aerial LED sign has been successfully formed in free space.
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area
Research of aerial imaging spectrometer data acquisition technology based on USB 3.0
NASA Astrophysics Data System (ADS)
Huang, Junze; Wang, Yueming; He, Daogang; Yu, Yanan
2016-11-01
With the emergence of UAV (unmanned aerial vehicle) platforms for aerial imaging spectrometers, the design of the aerial imaging spectrometer DAS (data acquisition system) faces new challenges. Owing to the limitations of the platform and other factors, the DAS must be compact, lightweight, low-cost, and universal. Traditional PCIe-based DAS designs are expensive, bulky, non-universal, and do not support plug-and-play, and have therefore been unable to support the wider adoption of aerial imaging spectrometers. To solve these problems, a new data acquisition scheme based on the USB 3.0 interface is proposed. USB 3.0 can satisfy the compactness, cost, and universality requirements thanks to its technological advantages: its theoretical transfer rate is up to 5 Gbps, and the GPIF programming interface achieves an effective theoretical data bandwidth of 3.2 Gbps, which fully meets the data-rate needs of the aerial imaging spectrometer. The scheme uses the slave-FIFO asynchronous data transfer mode between the FPGA and the USB3014 interface chip. The system first collects spectral data from the TLK2711 high-speed serial interface chip; the FPGA then buffers the data in DDR2 after ping-pong data processing; finally, the USB3014 interface chip transmits the data via an automatic-DMA approach and uploads it to a PC over a USB 3.0 cable. During the manufacture of an aerial imaging spectrometer, the DAS can perform image acquisition, transmission, storage, and display, providing the test and inspection functions the instrument requires. Tests show that the system runs stably with no data loss, and that the average transmission speed and SSD write speed stabilize at 1.28 Gbps. Consequently, this data acquisition system meets the application requirements of aerial imaging spectrometers.
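The ping-pong buffering step in the pipeline above (one buffer fills while the previously filled one drains toward the USB3014 chip) can be sketched abstractly. This is an illustrative Python model of the double-buffering pattern only, not the FPGA implementation; `PingPongBuffer` and `stream` are hypothetical names:

```python
# Illustrative model of ping-pong (double) buffering: the producer fills
# one buffer while the consumer drains the other, then the roles swap.
class PingPongBuffer:
    def __init__(self, size):
        self.buffers = [[], []]   # two alternating buffers
        self.size = size          # capacity of each buffer
        self.fill = 0             # index of the buffer currently being filled

    def produce(self, word):
        """Write one data word; return a full buffer when a swap occurs."""
        buf = self.buffers[self.fill]
        buf.append(word)
        if len(buf) == self.size:
            self.buffers[self.fill] = []   # fresh buffer in this slot
            self.fill ^= 1                 # swap: fill the other buffer next
            return buf                     # full buffer handed to the consumer
        return None

def stream(words, size):
    """Push a word stream through the buffer; collect the drained blocks."""
    pp = PingPongBuffer(size)
    out = []
    for w in words:
        block = pp.produce(w)
        if block is not None:
            out.append(block)
    return out
```

In hardware the two roles run concurrently; the sequential model above only shows the hand-off logic.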
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on the freeform-mirror and the improved integral imaging system is demonstrated. In the traditional integral imaging (II), the distortion originating from lens aberration warps elemental images and degrades the visual effect severely. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with the large off-screen depth, which limits floating aerial visual experience. To display the 3D image in the free space, the off-axis reflection system with the freeform-mirror is designed. By combining the improved II and the designed freeform optical element, the floating aerial 3D image is presented.
NASA Astrophysics Data System (ADS)
Morita, Shogo; Ito, Shusei; Yamamoto, Hirotsugu
2017-02-01
Aerial displays can form a transparent floating screen in mid-air and are expected to provide aerial floating signage. We have proposed aerial imaging by retro-reflection (AIRR) to form a large aerial LED screen. However, the luminance of the aerial image is not high enough for signage in broad daylight. The purpose of this paper is to propose a novel aerial display scheme featuring a hybrid of two different types of images. Under daylight, signs made of cubes are visible; at night, or under dark lighting conditions, aerial LED signs become visible. Our proposed hybrid display is composed of an LED sign, a beam splitter, retro-reflectors, and transparent acrylic cubes. The aerial LED sign is formed with AIRR. Furthermore, we place transparent acrylic cubes on the beam splitter. Light from the LED sign enters the transparent acrylic cubes, reflects twice inside them, then exits and converges to the position plane-symmetrical to the light source with respect to the cube array. Thus, the transparent acrylic cubes also form a real image of the source LED sign. We then arrange the transparent acrylic cubes into a sign so that this cube-based sign is apparent under daylight. We have developed a prototype display using 1-cm transparent cubes and retro-reflective sheeting, and successfully confirmed aerial image formation with AIRR and the transparent cubes, as well as the cube-based sign under daylight.
Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images
NASA Astrophysics Data System (ADS)
Liu, J.; Ji, S.; Zhang, C.; Qin, Z.
2018-05-01
Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate deep learning based stereo methods, which emerged around 2016 and spread rapidly, on aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated: one learns the matching cost with a convolutional neural network (MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse: models pre-trained on the KITTI 2012, KITTI 2015, and Driving datasets are applied directly to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared with the classic stereo matching method Semi-Global Matching (SGM) and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching under the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.
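For readers unfamiliar with the matching cost that MC-CNN learns, a classical hand-crafted cost (sum of absolute differences with winner-takes-all disparity selection) can be sketched as follows. This is a minimal 1-D illustration of the cost-volume idea, not any of the evaluated methods:

```python
def sad_disparity(left, right, max_disp, win=1):
    """Winner-takes-all disparity from a sum-of-absolute-differences (SAD)
    cost volume, computed on two single-row 'images' (lists of intensities).
    A classical baseline for the matching cost that MC-CNN replaces with a
    learned one: left pixel x is compared against right pixel x - d."""
    w = len(left)
    disp = [0] * w
    for x in range(w):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp + 1, x + 1)):  # keep x - d >= 0
            cost = 0
            for k in range(-win, win + 1):         # small matching window
                xl, xr = x + k, x - d + k
                if 0 <= xl < w and 0 <= xr < w:
                    cost += abs(left[xl] - right[xr])
                else:
                    cost += 255                    # penalize window overflow
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

SGM extends exactly this per-pixel cost with smoothness penalties aggregated along scanlines; end-to-end networks such as GC-Net learn both the cost and the aggregation.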
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of an ongoing debate over the USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, as is an understanding of the relationships among data sets from old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed increased sampling intensity at low cost relative to the time and travel required by ground methods. The cost to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha was less than $3,000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
1988-01-19
approach for the analysis of aerial images. In this approach, image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain dependent knowledge about prototypical urban
High Density Aerial Image Matching: State-of-the-Art and Future Prospects
NASA Astrophysics Data System (ADS)
Haala, N.; Cavegn, S.
2016-06-01
Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at evaluating photogrammetric 3D data capture in view of current developments in dense multi-view stereo image matching. Originally, the test targeted image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as an additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.
Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas
2014-03-10
We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
Melon yield prediction using small unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Zhao, Tiebiao; Wang, Zhongdao; Yang, Qi; Chen, YangQuan
2017-05-01
Thanks to developments in camera technology and small unmanned aerial systems (sUAS), it is possible to collect aerial images of a field with more flexible visits, higher resolution, and much lower cost. Furthermore, the performance of object detection based on deeply trained convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, using high-resolution aerial images to count melons in the field and predict the yield. The CNN-based object detection framework Faster R-CNN is applied to melon classification. Our results showed that sUAS plus CNNs were able to detect melons accurately in the late harvest season.
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.
Zhong, Jiandan; Lei, Tao; Yao, Guangle
2017-11-24
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on a sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data because the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of the deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. The generated candidate regions are then fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.
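The cascade structure described above (a cheap proposal stage pruning candidates for a more expensive classification stage) can be sketched generically. In this illustrative sketch, `propose` and `classify` are hypothetical stand-ins for the paper's two CNNs:

```python
def cascade_detect(windows, propose, classify, thresh=0.5):
    """Two-stage cascade: a cheap proposal stage keeps vehicle-like
    candidate windows, then an expensive second stage scores the
    survivors and keeps those above a confidence threshold."""
    candidates = [w for w in windows if propose(w)]        # stage 1: prune
    return [w for w in candidates if classify(w) >= thresh]  # stage 2: decide
```

The efficiency gain comes from the second, costly stage seeing only the (usually small) fraction of windows the first stage lets through.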
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion analysis based moving object detection from UAV aerial images remains an unsolved issue, largely because proper motion estimation is not taken into account. Existing approaches do not use motion-based pixel intensity measurements to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or segmentation separately. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area containing the moving object rather than searching the whole frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
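The frame-differencing step embedded in the scheme above, and the idea of restricting further processing to a window around detected motion, can be sketched as follows. This is a simplified illustration, not the DMM or SUED algorithms themselves:

```python
def frame_difference(prev, curr, thresh):
    """Binary motion mask by absolute frame differencing: pixels whose
    intensity change exceeds 'thresh' are marked moving (1). Frames are
    lists of equal-length rows of grayscale values."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def search_window(mask):
    """Bounding box (rmin, cmin, rmax, cmax) of the moving pixels, i.e.
    the region a DMM-style search window would restrict processing to;
    None when no motion is detected."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    rs = [p[0] for p in pts]
    cs = [p[1] for p in pts]
    return (min(rs), min(cs), max(rs), max(cs))
```

Segmentation then needs to run only inside the returned window instead of over the whole frame, which is the speed argument the abstract makes.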
Aerial image based die-to-model inspections of advanced technology masks
NASA Astrophysics Data System (ADS)
Kim, Jun; Lei, Wei-Guo; McCall, Joan; Zaatri, Suheil; Penn, Michael; Nagpal, Rajesh; Faivishevsky, Lev; Ben-Yishai, Michael; Danino, Udy; Tam, Aviram; Dassa, Oded; Balasubramanian, Vivek; Shah, Tejas H.; Wagner, Mark; Mangan, Shmoolik
2009-10-01
Die-to-Model (D2M) inspection is an innovative approach to running inspection based on a mask design layout data. The D2M concept takes inspection from the traditional domain of mask pattern to the preferred domain of the wafer aerial image. To achieve this, D2M transforms the mask layout database into a resist plane aerial image, which in turn is compared to the aerial image of the mask, captured by the inspection optics. D2M detection algorithms work similarly to an Aerial D2D (die-to-die) inspection, but instead of comparing a die to another die it is compared to the aerial image model. D2M is used whenever D2D inspection is not practical (e.g., single die) or when a validation of mask conformity to design is needed, i.e., for printed pattern fidelity. D2M is of particular importance for inspection of logic single die masks, where no simplifying assumption of pattern periodicity may be done. The application can tailor the sensitivity to meet the needs at different locations, such as device area, scribe lines and periphery. In this paper we present first test results of the D2M mask inspection application at a mask shop. We describe the methodology of using D2M, and review the practical aspects of the D2M mask inspection.
NASA Astrophysics Data System (ADS)
Turley, Anthony Allen
Many research projects require the use of aerial images. Wetlands evaluation, crop monitoring, wildfire management, environmental change detection, and forest inventory are but a few of the applications of aerial imagery. Low altitude Small Format Aerial Photography (SFAP) is a bridge between satellite and man-carrying aircraft image acquisition and ground-based photography. The author's project evaluates digital images acquired using low cost commercial digital cameras and standard model airplanes to determine their suitability for remote sensing applications. Images from two different sites were obtained. Several photo missions were flown over each site, acquiring images in the visible and near infrared electromagnetic bands. Images were sorted and analyzed to select those with the least distortion, and blended together with Microsoft Image Composite Editor. By selecting images taken within minutes apart, radiometric qualities of the images were virtually identical, yielding no blend lines in the composites. A commercial image stitching program, Autopano Pro, was purchased during the later stages of this study. Autopano Pro was often able to mosaic photos that the free Image Composite Editor was unable to combine. Using telemetry data from an onboard data logger, images were evaluated to calculate scale and spatial resolution. ERDAS ER Mapper and ESRI ArcGIS were used to rectify composite images. Despite the limitations inherent in consumer grade equipment, images of high spatial resolution were obtained. Mosaics of as many as 38 images were created, and the author was able to record detailed aerial images of forest and wetland areas where foot travel was impractical or impossible.
Automatic Sea Bird Detection from High Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
Mader, S.; Grenzdörffer, G. J.
2016-06-01
Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods that allow efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the rate of empty images regularly exceeds 90%. In this contribution we present our work aiming to support the processing of aerial images with modern image processing methods. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for the automatic detection of different sea bird species. The large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle these image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors for the subsequent elimination of false candidates and for classification tasks.
NASA Astrophysics Data System (ADS)
Shi, Yeyin; Thomasson, J. Alex; Yang, Chenghai; Cope, Dale; Sima, Chao
2017-05-01
Though they share many commonalities, one of the major differences between conventional high-altitude airborne remote sensing and low-altitude unmanned aerial system (UAS) based remote sensing is that the latter has a much smaller ground footprint for each image shot. To cover the same area on the ground, the low-altitude UAS-based platform must take many highly overlapped images to produce a good mosaic, instead of just one or a few image shots from the high-altitude aerial platform. Such a UAS flight usually takes 10 to 30 minutes or even longer to complete; the change in environmental lighting during this time span cannot be ignored, especially when spectral variations across parts of a field are of interest. In this case study, we compared the visible reflectance of two aerial images - one generated from mosaicked UAS images, the other a single image taken from a manned aircraft - over the same agricultural field to quantitatively evaluate the spectral variations caused by the different data acquisition strategies. Specifically, we (1) developed customized ground calibration points (GCPs) and an associated radiometric calibration method for UAS data processing based on the camera's sensitivity characteristics, and (2) developed a basic comparison method, based on regions of interest, for the radiometrically calibrated data from the two aerial platforms. We see this study as a starting point for a series of follow-up studies to understand environmental influences on UAS data and to investigate solutions that minimize such influences and ensure data quality.
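A common way to fit a digital-number-to-reflectance mapping from ground calibration targets is the empirical line method. The sketch below assumes a simple linear least-squares fit and is offered only as a generic illustration; the paper's own calibration is based on the camera's sensitivity characteristics and may differ:

```python
def empirical_line(dn_gcps, refl_gcps):
    """Least-squares linear fit reflectance = gain * DN + offset from
    calibration targets with known reflectance (an assumed, generic
    empirical-line sketch, not the paper's method)."""
    n = len(dn_gcps)
    mx = sum(dn_gcps) / n
    my = sum(refl_gcps) / n
    # slope and intercept of the ordinary least-squares line
    sxy = sum((x - mx) * (y - my) for x, y in zip(dn_gcps, refl_gcps))
    sxx = sum((x - mx) ** 2 for x in dn_gcps)
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset
```

With the fitted pair, every pixel's DN in an image band can be converted to apparent reflectance, which is what makes mosaics acquired under drifting illumination comparable.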
Detection and clustering of features in aerial images by neuron network-based algorithm
NASA Astrophysics Data System (ADS)
Vozenilek, Vit
2015-12-01
The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and the backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize this error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was compiled (ASP.NET on the Microsoft .NET platform). The main findings include the observation that man-made objects in aerial images can be successfully identified by detecting shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
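The bipolar sigmoid activation and plain backpropagation mentioned above can be sketched on a toy problem. This is an illustrative minimal network (a hypothetical 2-2-1 topology trained on bipolar XOR), not the paper's network:

```python
import math
import random

def bipolar_sigmoid(x):
    """f(x) = 2 / (1 + e^-x) - 1, ranging over (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def bipolar_sigmoid_prime(y):
    """Derivative expressed via the output y = f(x): f'(x) = (1 - y^2) / 2."""
    return 0.5 * (1.0 - y * y)

def train_xor(epochs=5000, lr=0.5, seed=1):
    """Plain backpropagation on a 2-2-1 network of bipolar sigmoid units,
    trained on XOR in bipolar encoding; returns a predict function.
    Weights carry the bias as their last element."""
    rnd = random.Random(seed)
    w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
    w_o = [rnd.uniform(-0.5, 0.5) for _ in range(3)]
    data = [([-1, -1], -1), ([-1, 1], 1), ([1, -1], 1), ([1, 1], -1)]

    def forward(x):
        h = [bipolar_sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        o = bipolar_sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        return h, o

    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            # backward pass: gradient of squared error (t - o)^2 / 2
            d_o = (t - o) * bipolar_sigmoid_prime(o)
            d_h = [d_o * w_o[j] * bipolar_sigmoid_prime(h[j]) for j in range(2)]
            for j in range(2):
                w_o[j] += lr * d_o * h[j]
            w_o[2] += lr * d_o
            for j in range(2):
                w_h[j][0] += lr * d_h[j] * x[0]
                w_h[j][1] += lr * d_h[j] * x[1]
                w_h[j][2] += lr * d_h[j]
    return lambda x: forward(x)[1]
```

The derivative identity `f'(x) = (1 - y^2)/2` is what makes the bipolar sigmoid convenient: the backward pass reuses the activations computed in the forward pass.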
NASA Astrophysics Data System (ADS)
Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric
2018-05-01
Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and then, based on the integrated results, optimizing the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) imagery, the sensor cannot capture high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crudely estimated transmission map and expands them. According to the expanded edges, the algorithm then sets a threshold to divide the crude transmission map into different areas and applies different guided filtering to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filtering, while its average computation time is around 40% of the latter's, and the detection ability of UAV images in fog and haze is improved effectively.
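The dark channel prior underlying the algorithm can be illustrated with a minimal sketch: the dark channel is the patch-wise minimum over all color channels, and a crude transmission map follows directly from it. The paper's actual contribution, edge-guided region-wise filtering of that map, is not reproduced here:

```python
def dark_channel(img, patch=1):
    """Dark channel of an RGB image (list of rows of (r, g, b) tuples):
    for each pixel, the minimum channel value over a local patch of
    radius 'patch'. Haze-free outdoor patches tend to have a dark
    channel near zero; haze lifts it."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = []
            for di in range(-patch, patch + 1):
                for dj in range(-patch, patch + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        vals.extend(img[ii][jj])
            out[i][j] = min(vals)
    return out

def transmission(dark, a=255.0, omega=0.95):
    """Crude transmission estimate t = 1 - omega * dark / A used by the
    dark-channel-prior model before refinement (the paper refines this
    map with edge-aware, region-wise guided filtering)."""
    return [[1.0 - omega * d / a for d in row] for row in dark]
```

Dehazed radiance is then recovered per pixel from the hazy value, the atmospheric light A, and the (refined) transmission.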
USDA-ARS?s Scientific Manuscript database
Although conventional high-altitude airborne remote sensing and low-altitude unmanned aerial system (UAS) based remote sensing share many commonalities, one of the major differences between the two remote sensing platforms is that the latter has much smaller image footprint. To cover the same area o...
Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs
NASA Astrophysics Data System (ADS)
Chen, H. R.; Tseng, Y. H.
2016-06-01
Historical aerial photographs provide direct evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps and images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available to assist orientation in the past. In our research, we developed an automatic process for matching historical aerial images with SIFT (Scale Invariant Feature Transform) to handle a great quantity of images by computer vision. SIFT is one of the most popular methods for image feature extraction and matching. The algorithm turns extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (random sample consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image; if its feature points match features in the database, the query image probably overlaps the control images. As the database is updated, more and more query images can be matched and aligned automatically. Research on environmental change across multiple time periods can then be conducted with these geo-referenced temporal-spatial data.
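The RANSAC step can be illustrated with a minimal sketch that fits a pure 2-D translation from putative matches. The real pipeline estimates a richer transform from SIFT correspondences; this toy model only shows the sample-score-keep loop that rejects outliers:

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from putative point matches
    [((x, y), (x2, y2)), ...] with RANSAC: repeatedly fit the model to
    one randomly chosen match and keep the translation supported by the
    most inliers, then refine over the inlier set."""
    rnd = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x, y), (u, v) = rnd.choice(matches)   # minimal sample: 1 match
        tx, ty = u - x, v - y                  # candidate model
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) <= tol
                   and abs(m[1][1] - m[0][1] - ty) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine: average the translation over all inliers
    n = len(best_inliers)
    tx = sum(m[1][0] - m[0][0] for m in best_inliers) / n
    ty = sum(m[1][1] - m[0][1] for m in best_inliers) / n
    return (tx, ty), best_inliers
```

A similarity or projective model works the same way, only with a larger minimal sample (2 or 4 matches) and a reprojection-error inlier test.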
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
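The color-moment features stacked into the dictionary above can be sketched directly. The sketch assumes the common mean/standard-deviation/skewness definition per channel, which is the usual meaning of "color moments":

```python
import math

def color_moments(pixels):
    """First three color moments (mean, standard deviation, skewness)
    per channel for a list of (r, g, b) pixels -- a compact 9-element
    color descriptor of the kind stacked into a dictionary alongside
    texture features."""
    feats = []
    n = len(pixels)
    for c in range(3):
        vals = [p[c] for p in pixels]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        std = math.sqrt(var)
        # cube root of the mean cubed deviation, sign preserved
        m3 = sum((v - mean) ** 3 for v in vals) / n
        skew = math.copysign(abs(m3) ** (1.0 / 3.0), m3)
        feats.extend([mean, std, skew])
    return feats
```

Each training region contributes one such vector (concatenated with its texture features) as a column of the dictionary that the sparse-representation classifier draws on.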
Automated aerial image based CD metrology initiated by pattern marking with photomask layout data
NASA Astrophysics Data System (ADS)
Davis, Grant; Choi, Sun Young; Jung, Eui Hee; Seyfarth, Arne; van Doornmalen, Hans; Poortinga, Eric
2007-05-01
The photomask is a critical element in the lithographic image transfer process from the drawn layout to the final structures on the wafer. The non-linearity of the imaging process and the related MEEF impose a tight control requirement on the photomask critical dimensions. Critical dimensions can be measured in aerial images with hardware emulation, a more recent complement to the standard scanning electron microscope measurement of wafers and photomasks. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging that cannot be observed directly by SEM measurement of the mask, while excluding the processing effects of printing and etching on the wafer. This makes a unique contribution to the difficult process control and modeling tasks in mask making. In the past, aerial image measurements have been used mainly to characterize the printability of mask repair sites. Development of photomask CD characterization with the AIMS™ tool was motivated by its MEEF sensitivity and the shorter feedback loop compared to wafer exposures. This paper describes a new application that includes: an improved interface for the selection of meaningful locations using the photomask and design layout data with the Calibre™ Metrology Interface, an automated recipe generation process, an automated measurement process, and automated analysis and result reporting on a Carl Zeiss AIMS™ system.
Object-based land-cover classification for metropolitan Phoenix, Arizona, using aerial photography
NASA Astrophysics Data System (ADS)
Li, Xiaoxiao; Myint, Soe W.; Zhang, Yujia; Galletti, Christopher; Zhang, Xiaoxiang; Turner, Billie L.
2014-12-01
Detailed land-cover mapping is essential for a range of research issues addressed by the sustainability and land system sciences and planning. This study uses an object-based approach to create a 1 m land-cover classification map of the expansive Phoenix metropolitan area using high spatial resolution aerial photography from the National Agriculture Imagery Program. It employs an expert-knowledge decision rule set and incorporates a cadastral GIS vector layer as auxiliary data. The classification rules were established on a hierarchical image object network, and the properties of parcels in the vector layer were used to establish land-cover types. Image segmentation was initially utilized to separate the aerial photos into parcel-sized objects, and was further used for detailed land-type identification within the parcels. Contextual and geometrical characteristics of image objects were used in the decision rule set to reduce the spectral limitations of the four-band aerial photography. Classification results include 12 land-cover classes and subclasses that may be assessed from the sub-parcel to the landscape scale, facilitating examination of scale dynamics. The proposed object-based classification method provides robust results, uses minimal and readily available ancillary data, and reduces computational time.
Integration of aerial remote sensing imaging data in a 3D-GIS environment
NASA Astrophysics Data System (ADS)
Moeller, Matthias S.
2003-03-01
For some years, sensor systems have been available that provide digital images of a new quality. Aerial stereo scanners in particular acquire digital multispectral images with an extremely high ground resolution of about 0.10-0.15 m and additionally provide a Digital Surface Model (DSM). Both imaging products can be used for detailed monitoring at scales up to 1:500. The processed, georeferenced multispectral orthoimages can be readily integrated into a GIS, making them useful for a number of applications. The DSM, derived from the forward- and backward-facing sensors of an aerial imaging system, provides a ground resolution of 0.5 m and can be used for 3D visualization purposes. In some cases it is essential to store the ground elevation as a Digital Terrain Model (DTM) and the heights of 3-dimensional objects in a separate database. Existing automated algorithms do not extract DTMs from aerial scanner DSMs precisely. This paper presents a new approach that combines the visible image data and the DSM data to generate DTMs with reliable geometric accuracy. Existing cadastral data can be used as a knowledge base for extracting building heights in cities. These elevation data are the essential source for a GIS-based urban information system with a 3D visualization component.
Aerial 3D display by use of a 3D-shaped screen with aerial imaging by retro-reflection (AIRR)
NASA Astrophysics Data System (ADS)
Kurokawa, Nao; Ito, Shusei; Yamamoto, Hirotsugu
2017-06-01
The purpose of this paper is to realize an aerial 3D display. We design an optical system that employs a projector below a retro-reflector and a 3D-shaped screen. A floating 3D image is formed by aerial imaging by retro-reflection (AIRR). Our proposed system is composed of a 3D-shaped screen, a projector, a quarter-wave retarder, a retro-reflector, and a reflective polarizer. Because AIRR forms aerial images that are plane-symmetric to the light sources with respect to the reflective polarizer, the shape of the 3D screen is inverted from the desired aerial 3D image. In order to expand the viewing angle, the 3D-shaped screen is surrounded by a retro-reflector. In order to separate the aerial image from light reflected off the retro-reflector surface, the retro-reflector is tilted by 30 degrees. A projector is located below the retro-reflector at the same height as the 3D-shaped screen. The optical axis of the projector is orthogonal to the 3D-shaped screen. Scattered light on the 3D-shaped screen forms the aerial 3D image. To demonstrate the proposed optical design, a corner-cube-shaped screen is used as the 3D-shaped screen; the aerial 3D image is thus a cube floating above the reflective polarizer. For example, an aerial green cube is formed by projecting a calculated image on the 3D-shaped screen. The green cube image is digitally inverted in depth by our software. Thus, we have succeeded in forming an aerial 3D image with our designed optical system.
Vehicle detection in aerial surveillance using dynamic Bayesian networks.
Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying
2012-04-01
We present an automatic vehicle detection system for aerial surveillance. The system moves beyond existing frameworks for vehicle detection in aerial surveillance, which are either region based or sliding-window based, and instead uses a pixelwise classification method. The novelty lies in the fact that, despite performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and non-vehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for classification. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
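The moment-preserving thresholding mentioned above can be sketched as Tsai's classic bilevel method: find two representative gray levels and a mixing fraction that preserve the image's first three moments, then threshold at the corresponding quantile. The implementation details below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def tsai_threshold(img):
    """Moment-preserving (Tsai) bilevel threshold: preserves the first
    three gray-level moments of the input (sketch of the idea used to
    set the Canny thresholds automatically)."""
    g = np.asarray(img, dtype=float).ravel()
    m1, m2, m3 = g.mean(), (g**2).mean(), (g**3).mean()
    cd = m2 - m1**2                      # variance
    c0 = (m1 * m3 - m2**2) / cd
    c1 = (m1 * m2 - m3) / cd
    disc = np.sqrt(max(c1**2 - 4 * c0, 0.0))
    z0 = (-c1 - disc) / 2.0              # dark representative level
    z1 = (-c1 + disc) / 2.0              # bright representative level
    p0 = (z1 - m1) / (z1 - z0)           # fraction of dark pixels
    return np.quantile(g, p0)            # threshold at the p0-quantile
```

For a perfectly bilevel image the recovered levels z0, z1 equal the two gray values and p0 equals the dark-pixel fraction, so the threshold falls between the two modes.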
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation, and point cloud filtering. The image radiation pre-processing reduces the effects of inherent radiometric problems and optimizes the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure, and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel-accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales over different land-cover types. The accuracy evaluation is based on a comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
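The cross-correlation stage rests on the normalized cross-correlation (NCC) score; a brute-force, single-template sketch follows (the paper's multi-image, geometrically constrained version is far more elaborate):

```python
import numpy as np

def ncc_match(img, tpl):
    """Exhaustive normalized cross-correlation of a template over an
    image; returns the best (row, col) offset and its NCC score.
    Illustrative sketch of the similarity measure only."""
    th, tw = tpl.shape
    t = (tpl - tpl.mean()) / tpl.std()      # zero-mean, unit-variance template
    H, W = img.shape
    best, pos = -2.0, (0, 0)
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            w = img[r:r + th, c:c + tw]
            sd = w.std()
            if sd == 0:                     # flat window: NCC undefined
                continue
            score = ((w - w.mean()) / sd * t).mean()
            if score > best:
                best, pos = score, (r, c)
    return pos, best
```

An exact match scores 1.0; in a pyramid scheme this search would be restricted to a small window predicted from the coarser level and the geometric constraints.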
Unmanned aerial vehicle: A unique platform for low-altitude remote sensing for crop management
USDA-ARS?s Scientific Manuscript database
Unmanned aerial vehicles (UAV) provide a unique platform for remote sensing to monitor crop fields that complements remote sensing from satellite, aircraft and ground-based platforms. UAV-based remote sensing is versatile at ultra-low altitude, able to provide an ultra-high-resolution imag...
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated by a Gaussian mixture model; it strengthens the relation between the GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence-updating equation based on three rules. With this confidence-based term, the disparity assignment can be selected heuristically among the disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms a representative surface-reconstruction variant of SGM and performs particularly well on aerial images.
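A hedged sketch of what a GCP-based Gaussian-mixture data term could look like: each GCP contributes a Gaussian around its disparity, weighted by spatial proximity to the pixel being estimated. The weighting scheme and all parameters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gcp_prior_cost(p, disparities, gcp_xy, gcp_disp, sigma_s=20.0, sigma_d=2.0):
    """Negative log-likelihood of candidate disparities for pixel p under
    a Gaussian mixture over GCP disparities, with mixture weights from
    spatial proximity (illustrative; not the paper's exact model)."""
    # spatial weights: nearby GCPs dominate the mixture
    w = np.exp(-np.sum((gcp_xy - p) ** 2, axis=1) / (2 * sigma_s**2))
    w = w / w.sum()
    # mixture likelihood of each candidate disparity
    like = (w[None, :] * np.exp(-(disparities[:, None] - gcp_disp[None, :]) ** 2
                                / (2 * sigma_d**2))).sum(axis=1)
    return -np.log(like + 1e-12)
```

In an SGM-style framework this cost would be added to the photometric data term, pulling the disparity of pixels near a GCP toward that GCP's disparity.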
Very high resolution aerial films
NASA Astrophysics Data System (ADS)
Becker, Rolf
1986-11-01
The use of very high resolution aerial films in aerial photography is evaluated. Commonly used panchromatic, color, and CIR films and their high resolution equivalents are compared. Based on practical experience and systematic investigations, the very high image quality and improved height accuracy that can be achieved using these films are demonstrated. Advantages to be gained from this improvement and operational restrictions encountered when using high resolution film are discussed.
NASA Astrophysics Data System (ADS)
Ham, S.; Oh, Y.; Choi, K.; Lee, I.
2018-05-01
Detecting unregistered buildings from aerial images is an important task for urban management, such as inspecting illegal buildings in green belts or updating GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images, since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks, such as monitoring illegal buildings or illegal land-use change.
3D Surface Generation from Aerial Thermal Imagery
NASA Astrophysics Data System (ADS)
Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.
2015-12-01
Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, precise 3D measurement of objects is challenging. In this paper the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video, then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens and mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the accuracy of the 3D model generated from thermal images is comparable to a DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with 9 GCPs measured in the area shows a Root Mean Square Error (RMSE) smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
Spectral Imaging from Uavs Under Varying Illumination Conditions
NASA Astrophysics Data System (ADS)
Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I.
2013-08-01
Rapidly developing unmanned aerial vehicles (UAV) have provided the remote sensing community with a new, rapidly deployable tool for small-area monitoring. The progress of small-payload UAVs has created greater demand for lightweight aerial payloads. For applications requiring aerial images, a simple consumer camera provides acceptable data. For applications requiring more detailed spectral information about the surface, a new Fabry-Perot interferometer based spectral imaging technology has been developed. This technology produces tens of successive images of the scene at different wavelength bands in a very short time. These images can be assembled into spectral data cubes with stereoscopic overlaps. In the field, weather conditions vary, and the UAV operator often has to choose between flying in suboptimal conditions and not flying at all. Our objective was to investigate methods for quantitative radiometric processing of images taken under varying illumination conditions, thus expanding the range of weather conditions during which successful imaging flights can be made. A new method based on in-situ measurement of irradiance, either on the UAV platform or on the ground, was developed. We tested the methods in a precision agriculture application using realistic data collected in difficult illumination conditions. Internal homogeneity of the original image data (average coefficient of variation in overlapping images) was 0.14-0.18. In the corrected data, the homogeneity was 0.10-0.12 with a correction based on broadband irradiance measured on the UAV, 0.07-0.09 with a correction based on spectral irradiance measured on the ground, and 0.05-0.08 with a radiometric block adjustment based on the image data. Our results were very promising, indicating that quantitative UAV-based remote sensing could be operational in diverse conditions, which is a prerequisite for many environmental remote sensing applications.
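The in-situ irradiance correction amounts to normalizing image digital numbers by the ratio of a reference irradiance to the measured irradiance, and homogeneity is then scored with the coefficient of variation. A minimal sketch under these simplified assumptions (not the authors' full radiometric block-adjustment model):

```python
import numpy as np

def irradiance_correct(dn, e_meas, e_ref):
    """Scale digital numbers to a reference irradiance level:
    r = DN * (E_ref / E_measured). Simplified linear-sensor assumption."""
    return dn * (e_ref / e_meas)

def cv_overlap(values):
    """Coefficient of variation over overlapping observations of the same
    target, the homogeneity metric quoted in the abstract."""
    v = np.asarray(values, dtype=float)
    return v.std() / v.mean()
```

With a constant-reflectance target observed under changing irradiance, the correction collapses the spread of observed DNs to zero.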
First demonstration of aerial gamma-ray imaging using drone for prompt radiation survey in Fukushima
NASA Astrophysics Data System (ADS)
Mochizuki, S.; Kataoka, J.; Tagawa, L.; Iwamoto, Y.; Okochi, H.; Katsumi, N.; Kinno, S.; Arimoto, M.; Maruhashi, T.; Fujieda, K.; Kurihara, T.; Ohsuka, S.
2017-11-01
Considerable amounts of radioactive substances (mainly 137Cs and 134Cs) were released into the environment after the Japanese nuclear disaster in 2011. Some restrictions on residence areas were lifted in April 2017, owing to successive and effective decontamination operations. However, the distribution of radioactive substances in vast areas of mountain, forest and satoyama close to the city is still unknown; thus, decontamination operations in such areas are being hampered. In this paper, we report the first aerial gamma-ray imaging of a schoolyard in Fukushima using a drone carrying a high-sensitivity Compton camera. We show that the distribution of 137Cs in regions several tens to a hundred meters across can be imaged with a typical resolution of 2-5 m within a 10-20 min flight. The aerial gamma-ray images taken 10 m and 20 m above the ground are qualitatively consistent with a dose map reconstructed from ground-based measurements using a survey meter. Although further quantification is needed for the distance and air-absorption corrections to derive an in-situ dose map, such an aerial drone system can reduce measurement time by a factor of ten and is suitable for places where ground-based measurements are difficult.
Vehicle Detection of Aerial Image Using TV-L1 Texture Decomposition
NASA Astrophysics Data System (ADS)
Wang, Y.; Wang, G.; Li, Y.; Huang, Y.
2016-06-01
Vehicle detection from high-resolution aerial images facilitates the study of public traveling behavior on a large scale. In the context of roads, a simple and effective algorithm is proposed to extract texture-salient vehicles from the pavement surface. Texturally speaking, the majority of the pavement surface changes little except in the neighborhood of vehicles and edges. Within a certain distance from a given vector of the road network, the aerial image is decomposed into a smoothly varying cartoon part and an oscillatory textural part. The variational model with a Total Variation regularization term and an L1 fidelity term (TV-L1) is adopted to obtain the salient texture of vehicles and the cartoon surface of the pavement. To eliminate the noise of the texture decomposition, regions of pavement surface are refined by seed growing and morphological operations. Based on shape saliency analysis of the central objects in those regions, vehicles are detected as objects of salient rectangular shape. The proposed algorithm is tested on a diverse set of aerial images acquired at various resolutions and in various scenarios around China. Experimental results demonstrate that the proposed algorithm detects vehicles at a rate of 71.5% with a false alarm rate of 21.5%, and that processing a 4656 × 3496 aerial image takes 39.13 seconds. It is promising for large-scale transportation management and planning.
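A crude sketch of the TV-L1 cartoon-texture split, using smoothed gradient descent on the energy rather than a proper primal-dual solver; the step size, smoothing constants, and boundary handling are illustrative assumptions:

```python
import numpy as np

def tv_l1_decompose(f, lam=0.5, step=0.05, n_iter=300, eps=1e-2):
    """Smoothed gradient descent on E(u) = sum |grad u|_eps + lam * sum |u-f|_eps.
    Returns the cartoon part u and texture part f - u. Illustrative sketch
    only; a real TV-L1 solver would use a primal-dual scheme."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences (replicated last row/column)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)      # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # divergence via backward differences (wrap at the border: sketch shortcut)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        fid = (u - f) / np.sqrt((u - f)**2 + eps)   # smoothed L1 fidelity gradient
        u = u + step * (div - lam * fid)
    return u, f - u
```

Applied to a smooth ramp with a superimposed checkerboard, the cartoon part absorbs the ramp while the high-frequency pattern migrates to the texture part.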
Aerial vehicles collision avoidance using monocular vision
NASA Astrophysics Data System (ADS)
Balashov, Oleg; Muraviev, Vadim; Strotov, Valery
2016-10-01
In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize an aerial vehicle, a system of equations relating object coordinates in space to the observed image is solved. Its solution gives the current position and speed of the detected object in space, from which distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
DOT National Transportation Integrated Search
2012-02-01
For rapid deployment of bridge scan missions, sub-inch aerial imaging using small-format aerial photography is suggested. Under-belly photography is used to generate high-resolution aerial images that can be geo-referenced and used for quantifyin...
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung
2016-01-01
Wilderness search and rescue entails performing a wide-range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
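Target positioning from a calibrated UAV camera is often reduced to intersecting the pixel's viewing ray with a flat ground plane; a minimal sketch under that common simplification (not necessarily the authors' exact method):

```python
import numpy as np

def pixel_to_ground(px, py, K, R, cam_pos, ground_z=0.0):
    """Locate a detected target on the plane z = ground_z by back-projecting
    pixel (px, py) through intrinsics K and camera-to-world rotation R from
    camera position cam_pos. Flat-ground assumption, illustrative only."""
    ray_cam = np.linalg.inv(K) @ np.array([px, py, 1.0])  # ray in camera frame
    ray_world = R @ ray_cam                               # ray in world frame
    t = (ground_z - cam_pos[2]) / ray_world[2]            # scale to hit the plane
    return cam_pos + t * ray_world
```

For a nadir-looking camera, the principal point maps straight down to the point directly beneath the vehicle, and off-center pixels map to proportionally displaced ground coordinates.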
Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery
NASA Astrophysics Data System (ADS)
Sheng, Yongwei
2000-12-01
Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor-intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a way to reconstruct crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns have a solid geometric form, conifer crowns are modeled as generalized hemi-ellipsoids. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) surface model composition from tree models, (2) occlusion in disparity prediction from tree models, (3) integrating the predicted disparities into image matching, (4) reducing the tree-model edge effect on the disparity map, (5) occlusion in orthophoto production, and (6) foreshortening in image matching, which is very serious for conifer crown surfaces. Solutions to these problems are necessary for successful crown surface reconstruction.
The model-based approach was applied to recover the canopy surface of a dense redwood stand using tri-ocular high-resolution images scanned from 1:2,400 aerial photographs. The results demonstrate the approach's ability to reconstruct complicated stands. The model-based approach proposed in this thesis is potentially applicable to other surface-recovery problems with a priori knowledge about the objects.
Aerial Images and Convolutional Neural Network for Cotton Bloom Detection.
Xu, Rui; Li, Changying; Paterson, Andrew H; Jiang, Yu; Sun, Shangpeng; Robertson, Jon S
2017-01-01
Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields over 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure-from-motion method. The quality of the dense point cloud was analyzed, and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected in different images based on its 3D location. The accuracy and incompleteness of the dense point cloud were analyzed because they affected the accuracy of the 3D location of the blooms and thus the accuracy of the bloom registration. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually, with an error of -4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.
Application of machine learning for the evaluation of turfgrass plots using aerial images
NASA Astrophysics Data System (ADS)
Ding, Ke; Raheja, Amar; Bhandari, Subodh; Green, Robert L.
2016-05-01
Historically, investigation of turfgrass characteristics has been limited to visual ratings. Although relevant information may result from such evaluations, final inferences may be questionable because of the subjective manner in which the data are collected. Recent advances in computer vision techniques allow researchers to objectively measure turfgrass characteristics such as percent ground cover, turf color, and turf quality from digital images. This paper focuses on developing a methodology for automated assessment of turfgrass quality from aerial images. Images of several turfgrass plots of varying quality were gathered using a camera mounted on an unmanned aerial vehicle. The quality of these plots was also evaluated by visual ratings. The goal was to use the aerial images to generate quality evaluations on a regular basis for the optimization of water treatment. The aerial images are used to train a neural network, a nonlinear classifier commonly used in machine learning, with features such as intensity, color, and texture of the turfgrass extracted from the images. The output of the trained neural network is a rating of the grass, which is compared to the visual ratings. Currently, the quality and the color of turfgrass, measured as the greenness of the grass, are evaluated. Textures are calculated using Gabor filters and the co-occurrence matrix. Other classifiers such as support vector machines, as well as simpler linear models such as Ridge regression and LARS regression, are also used, and the performance of each model is compared. The results show encouraging potential for using machine learning techniques for the evaluation of turfgrass quality and color.
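Of the models compared, Ridge regression has a convenient closed form mapping image features to quality ratings; a minimal sketch (feature extraction is omitted, and the feature/rating names are placeholders):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha I)^{-1} X^T y.
    X holds one feature vector per plot, y the corresponding ratings."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

def ridge_predict(X, w):
    """Predicted ratings for new feature vectors."""
    return X @ w
```

With alpha near zero this reduces to ordinary least squares; increasing alpha shrinks the weights, which helps when texture and color features are strongly correlated.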
Complex Building Detection Through Integrating LIDAR and Aerial Photos
NASA Astrophysics Data System (ADS)
Zhai, R.
2015-02-01
This paper proposes a new approach to digital building detection through the integration of LiDAR data and aerial imagery. Most building rooftops are represented by several regions grown from different seed pixels. Following the principles of image segmentation, this paper employs a new region-based technique to segment images, combining the advantages of LiDAR and aerial images. First, multiple seed points are selected automatically, taking several constraints into consideration. Then, region growing proceeds by combining the elevation attribute from the LiDAR data, the visibility attribute from the DEM (Digital Elevation Model), and the radiometric attribute from the warped images. Through this combination, pixels with similar height, visibility, and spectral attributes are merged into one region, which is believed to represent the whole building area. The proposed methodology was applied to real data and competitive results were achieved.
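The combined-attribute region growing can be sketched as a flood fill that accepts 4-neighbours whose LiDAR height and image intensity stay close to the seed's; the two-attribute simplification (visibility dropped) and the thresholds are assumptions for illustration:

```python
import numpy as np

def region_grow(height, spectral, seed, dh=0.5, ds=10.0):
    """Grow a rooftop region from a seed pixel, accepting 4-neighbours whose
    LiDAR height and image intensity are within dh and ds of the seed's.
    Simplified sketch of the combined-attribute segmentation."""
    H, W = height.shape
    out = np.zeros((H, W), dtype=bool)
    h0, s0 = height[seed], spectral[seed]
    stack = [seed]
    out[seed] = True
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < H and 0 <= cc < W and not out[rr, cc]
                    and abs(height[rr, cc] - h0) < dh
                    and abs(spectral[rr, cc] - s0) < ds):
                out[rr, cc] = True
                stack.append((rr, cc))
    return out
```

A seed placed on a flat-roofed building surrounded by ground at a different elevation grows to exactly the rooftop footprint, since the height test rejects the ground pixels.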
Aerial image metrology for OPC modeling and mask qualification
NASA Astrophysics Data System (ADS)
Chen, Ao; Foong, Yee Mei; Thaler, Thomas; Buttgereit, Ute; Chung, Angeline; Burbine, Andrew; Sturtevant, John; Clifford, Chris; Adam, Kostas; De Bisschop, Peter
2017-06-01
As nodes become smaller and smaller, the OPC applied to enable them becomes more and more sophisticated. This trend peaks today in curvilinear OPC approaches that are starting to appear on the roadmap. With this sophistication of OPC, mask pattern complexity increases, and the CD-SEM based mask qualification strategies used today are starting to struggle to provide a precise forecast of the printing behavior of a mask on the wafer. An aerial image CD measurement performed on the ZEISS Wafer-Level CD system (WLCD) is a complementary approach to mask CD-SEM for judging the lithographic performance of the mask and its critical production features. The advantage of the aerial image is that it includes all optical effects of the mask, such as OPC, SRAFs, and 3D mask effects, once the image is taken under scanner-equivalent illumination conditions. Additionally, it reduces feature complexity and analyzes the printing-relevant CD.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank
2017-12-01
Laser imaging systems are prominent candidates for detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. Laser reflection characteristics for laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflection in high resolution images. For the first time, LRCSs are determined in a combined experimental and computational approach by high resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated, taking into account diffuse and specular reflectance properties based on the Oren-Nayar and the Cook-Torrance reflectance models, respectively.
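The diffuse half of such a reflectance model can be illustrated with the standard Oren-Nayar term. The sketch below is our own simplification, not the authors' optimized surface model, and it omits the Cook-Torrance specular lobe entirely.

```python
import math

def oren_nayar(theta_i, theta_r, phi_diff, sigma, albedo=1.0):
    """Qualitative Oren-Nayar diffuse reflectance term.
    theta_i/theta_r: incidence/reflection angles from the normal (radians),
    phi_diff: azimuthal difference between the two directions,
    sigma: surface roughness standard deviation (radians)."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (albedo / math.pi) * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta))
```

A quick sanity check: for sigma = 0 the expression reduces to the Lambertian cos(theta_i)/pi falloff.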
Barta, András; Horváth, Gábor
2003-12-01
The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
Contrast matching of line gratings obtained with NXE3XXX and EUV-interference lithography
NASA Astrophysics Data System (ADS)
Tasdemir, Zuhal; Mochi, Iacopo; Olvera, Karen Garrido; Meeuwissen, Marieke; Yildirim, Oktay; Custers, Rolf; Hoefnagels, Rik; Rispens, Gijsbert; Fallica, Roberto; Vockenhuber, Michaela; Ekinci, Yasin
2017-10-01
Extreme UV lithography (EUVL) has gained considerable attention for several decades as a potential technology for the semiconductor industry and is now close to being adopted in high-volume manufacturing. At the Paul Scherrer Institute (PSI), we have focused our attention on EUV resist performance issues by testing available high-performance EUV resists in the framework of a joint collaboration with ASML. For this purpose, we use the grating-based EUV-IL setup installed at the Swiss Light Source (SLS) at PSI, in which a coherent beam with 13.5 nm wavelength is used to produce a periodic aerial image with virtually 100% contrast and large depth of focus. Interference lithography is a relatively simple technique that does not require many optical components; unintended flare is therefore minimized and the aerial image is a well-defined sinusoidal pattern. For the collaborative work between PSI and ASML, exposures are being performed on the EUV-IL exposure tool at PSI. For a better quantitative comparison to NXE scanner results, we aim to determine the actual NILS of the EUV-IL exposure tool at PSI. Ultimately, any resist-related metrology must be aligned and compared with the performance of EUV scanners. Moreover, EUV-IL is a powerful method for evaluating resist performance, and a resist that performs well with EUV-IL generally also performs well with NXE scanners. However, a quantitative prediction of performance based on EUV-IL measurements has not been possible due to differences in aerial image formation. In this work, we aim to study the performance of EUV resists with different aerial images. For this purpose, after the real interference-pattern exposure, we overlay a flat-field exposure to emulate different levels of contrast. Finally, the results are compared with data obtained from an EUV scanner.
This study will enable not only matching of the data obtained from EUV-IL at PSI with the performance of NXE scanners, but also a better understanding of resist fundamentals, by studying the effects of the aerial image on resist performance while changing the aerial image contrast in a controlled manner using EUV-IL.
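The contrast-emulation trick described in this abstract (overlaying a flat-field exposure on a sinusoidal interference image) can be captured in one line. For an idealized image I(x) = D·(1 + v0·cos kx) plus a constant flat dose F, the peak-to-trough modulation contrast drops to v0·D/(D + F). The function below encodes that relation; it is our idealized model of the setup, not PSI's calibration.

```python
def contrast_with_flat_dose(v0, pattern_dose, flat_dose):
    """Modulation contrast of I(x) = D*(1 + v0*cos(kx)) + F.
    Imax = D*(1+v0)+F, Imin = D*(1-v0)+F, so
    V = (Imax-Imin)/(Imax+Imin) = v0*D/(D+F)."""
    return v0 * pattern_dose / (pattern_dose + flat_dose)
```

For example, a flat dose equal to the pattern dose halves a 100%-contrast interference image.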
Density estimation in aerial images of large crowds for automatic people counting
NASA Astrophysics Data System (ADS)
Herrmann, Christian; Metzler, Juergen
2013-05-01
Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system being able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principle aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people allowing a plausibility check by a human operator. In order to automatize the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, the performance gain of our new system will be measured. By improving our previous system, we will increase the benefit of an image-based solution for counting people in large crowds.
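One of the density cues this abstract names, edge intensity, can be approximated by the mean gradient magnitude over an image patch. The sketch below is a deliberately crude stand-in for the authors' full feature set (which also includes spatial-frequency features).

```python
def edge_density(img):
    """Mean finite-difference gradient magnitude over a grayscale patch,
    a crude edge-intensity cue: crowded regions tend to score higher than
    smooth regions such as roads or building roofs."""
    rows, cols = len(img), len(img[0])
    total, n = 0.0, 0
    for r in range(rows - 1):
        for c in range(cols - 1):
            gx = img[r][c + 1] - img[r][c]
            gy = img[r + 1][c] - img[r][c]
            total += (gx * gx + gy * gy) ** 0.5
            n += 1
    return total / n
```

A flat patch scores zero, while a high-frequency checkered patch scores high, which is the discrimination the abstract relies on.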
NASA Astrophysics Data System (ADS)
Chen, C.; Gong, W.; Hu, Y.; Chen, Y.; Ding, Y.
2017-05-01
The automated detection of buildings in aerial images is a fundamental problem in aerial and satellite image analysis. Recently, thanks to advances in feature description, the region-based CNN model (R-CNN) for object detection has been receiving increasing attention. Despite its excellent performance in object detection, it is problematic to directly leverage the features of the R-CNN model for building detection in a single aerial image. A single aerial image is in vertical view and buildings possess a significant directional feature; however, in the R-CNN model the direction of the building is ignored and detection results are represented by horizontal rectangles. For this reason, detection results with horizontal rectangles cannot describe buildings precisely. To address this problem, in this paper we propose a novel model with a key feature related to orientation, namely Oriented R-CNN (OR-CNN). Our contributions are mainly in the following two aspects: 1) introducing a new oriented layer network for detecting the rotation angle of a building on the basis of the successful VGG-net R-CNN model; 2) proposing the oriented rectangle to leverage the powerful R-CNN for remote-sensing building detection. In experiments, we establish a complete and brand-new data set for training our oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection data set. We demonstrate state-of-the-art results compared with the previous baseline methods.
Efficient content-based low-altitude images correlated network and strips reconstruction
NASA Astrophysics Data System (ADS)
He, Haiqing; You, Qi; Chen, Xiaoyong
2017-01-01
The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, such a method is unsuitable for fully automatic photogrammetric data processing. In this paper, we explore a content-based approach without manual intervention or external information for strip reconstruction. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded using the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. Then the images correlated network is reconstructed by similarity measurement, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
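The TF-IDF encoding over quantized feature "words" can be sketched as below. This toy version skips the SIFT extraction and vocabulary-tree quantization and operates directly on lists of word IDs; all names here are our own.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for 'documents' that are lists of visual-word IDs
    (one list per image after vocabulary-tree quantization)."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({w: (c / len(d)) * math.log(n / df[w])
                     for w, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse TF-IDF vectors (dicts)."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Images with identical word histograms score similarity 1, and images sharing no words score 0, which is the signal used to link images into the correlated network.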
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology goes a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
ERIC Educational Resources Information Center
Bond, William Glenn
2012-01-01
In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…
Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.
Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan
2017-12-01
Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners are today presented with easy low-cost access to aerial photographs at remote locations. The present paper aims to explore boundaries in which the low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examinations. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording for seeking out scattered body parts was efficient. In contrast, the less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Young, Larry A.; Pisanich, Gregory; Ippolito, Corey; Alena, Rick
2005-01-01
The objective of this paper is to review the anticipated imaging and remote-sensing technology requirements for aerial vehicle survey missions to other planetary bodies in our Solar system that can support in-atmosphere flight. In the not too distant future, such planetary aerial vehicle (a.k.a. aerial explorer) exploration missions will become feasible. Imaging and remote-sensing observations will be a key objective for these missions. Accordingly, it is imperative that optimal solutions in terms of image acquisition and real-time autonomous analysis of image data sets be developed for such vehicles.
Unmanned Aerial Systems and Spectroscopy for Remote Sensing Applications in Archaeology
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Cuca, B.; Hadjimitsis, D. G.
2015-04-01
Remote sensing has opened up new dimensions in archaeological research. Although there has been significant progress in increasing the resolution of space/aerial sensors and image processing, crop (and soil) mark formations, which relate to buried archaeological remains, are difficult to detect, since these marks may not be visible in the images if observed over a different period or at a different spatial/spectral resolution. In order to support the improvement of earth observation remote sensing technologies specifically targeting archaeological research, the formation of crop/soil marks needs to be studied in detail. In this paper the contribution of both Unmanned Aerial Systems and ground spectroradiometers is discussed through a variety of examples applied in the eastern Mediterranean region (Cyprus and Greece) as well as in Central Europe (Hungary). In-situ spectroradiometric campaigns can be applied for the removal of atmospheric impact from simultaneous satellite overpass images. In addition, as shown in this paper, the systematic collection of ground truth data prior to the satellite/aerial acquisition can be used to detect the optimum temporal and spectral resolution for the detection of vegetation stress related to buried archaeological remains. Moreover, phenological studies of the crops in the area of interest can be simulated for the potential sensors based on their Relative Response Filters and therefore better prepare the satellite/aerial campaigns. Ground data and the use of Unmanned Aerial Systems (UAS) can provide increased insight for studying the formation of crop and soil marks. New algorithms such as vegetation indices and linear orthogonal equations for the enhancement of crop marks can be developed based on the specific spectral characteristics of the area. As well, UAS can be used for remote sensing applications in order to document, survey, and model cultural heritage and archaeological sites.
NASA Astrophysics Data System (ADS)
Siok, Katarzyna; Jenerowicz, Agnieszka; Woroszkiewicz, Małgorzata
2017-07-01
Archival aerial photographs are often the only reliable source of information about an area. However, these are single-band data that do not allow unambiguous detection of particular forms of land cover. Thus, the authors of this article seek to develop a method of coloring panchromatic aerial photographs that enables increasing the spectral information of such images. The study used data integration algorithms based on pansharpening, implemented in commonly used remote sensing programs: ERDAS, ENVI, and PCI. Aerial photos and Landsat multispectral data recorded in 1987 and 2016 were chosen. This study proposes the use of modified intensity-hue-saturation and Brovey methods. The use of these methods enabled the addition of red-green-blue (RGB) components to monochrome images, thus enhancing their interpretability and spectral quality. The limitations of the proposed method relate to the availability of RGB satellite imagery, the accuracy of mutual orientation of the aerial and satellite data, and the imperfection of archival aerial photographs. Therefore, it should be expected that the results of coloring will not be perfect compared to the results of the fusion of recent data with a similar ground sampling resolution, but they will still allow a more accurate and efficient classification of land cover registered on archival aerial photographs.
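The Brovey transform named in this abstract can be stated per pixel: each multispectral band is scaled by the ratio of the panchromatic (or high-resolution) value to the band sum, injecting spatial detail while preserving band ratios. A minimal sketch follows; the `eps` guard against division by zero is our addition.

```python
def brovey(r, g, b, pan, eps=1e-9):
    """Per-pixel Brovey-transform fusion: each band is multiplied by
    pan / (r + g + b), so the fused bands sum to (approximately) pan
    while keeping the original color ratios."""
    s = r + g + b + eps
    return (r * pan / s, g * pan / s, b * pan / s)
```

Note that the fused bands sum to the pan value, which is why Brovey sharpening preserves hue but can distort overall radiometry, one motivation for the modified variants the study proposes.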
Draper Laboratory small autonomous aerial vehicle
NASA Astrophysics Data System (ADS)
DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.
1997-06-01
The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture, and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.
Initial Efforts toward Mission-Representative Imaging Surveys from Aerial Explorers
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Plice, Laura; Ippolito, Corey; Young, Larry A.; Lau, Benton; Lee, Pascal
2004-01-01
Numerous researchers have proposed the use of robotic aerial explorers to perform scientific investigation of planetary bodies in our solar system. One of the essential tasks for any aerial explorer is to be able to perform scientifically valuable imaging surveys. The focus of this paper is to discuss the challenges implicit in, and recent observations related to, acquiring mission-representative imaging data from a small fixed-wing UAV, acting as a surrogate planetary aerial explorer. This question of successfully performing aerial explorer surveys is also tied to other topics of technical investigation, including the development of unique bio-inspired technologies.
Transforming the Geocomputational Battlespace Framework with HDF5
2010-08-01
layout level, dataset arrays can be stored in chunks or tiles, enabling fast subsetting of large datasets, including compressed datasets. HDF software...Image Base (CIB) image of the AOI: an orthophoto made from rectified grayscale aerial images b. An IKONOS satellite image made up of 3 spectral
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery: QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With an experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
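The shrinkage operation at the heart of such wavelet-domain denoising is soft thresholding. The sketch below shows only that step; the entropy-aware selection of the threshold t, which is the paper's actual contribution, is not reproduced here.

```python
def soft_threshold(coeffs, t):
    """Soft thresholding of wavelet coefficients: values within [-t, t]
    are zeroed, values outside are shrunk toward zero by t."""
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t)
            for c in coeffs]
```

Small coefficients (mostly noise) vanish while large ones (signal) survive slightly attenuated, which is why the choice of t controls the denoising/blurring trade-off.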
Maxwell, Susan K.
2010-01-01
Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917
Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System
2015-03-01
PRECISION RELATIVE POSITIONING FOR AUTOMATED AERIAL REFUELING FROM A STEREO IMAGING SYSTEM. Thesis, Kyle P. Werner, 2Lt, USAF, AFIT-ENG-MS-15-M-048. Presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of... APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
NASA Astrophysics Data System (ADS)
Fahey, R. T.; Tallant, J.; Gough, C. M.; Hardiman, B. S.; Atkins, J.; Scheuermann, C. M.
2016-12-01
Canopy structure can be an important driver of forest ecosystem functioning - affecting factors such as radiative transfer and light use efficiency, and consequently net primary production (NPP). Both above- (aerial) and below-canopy (terrestrial) remote sensing techniques are used to assess canopy structure and each has advantages and disadvantages. Aerial techniques can cover large geographical areas and provide detailed information on canopy surface and canopy height, but are generally unable to quantitatively assess interior canopy structure. Terrestrial methods provide high resolution information on interior canopy structure and can be cost-effectively repeated, but are limited to very small footprints. Although these methods are often utilized to derive similar metrics (e.g., rugosity, LAI) and to address equivalent ecological questions and relationships (e.g., link between LAI and productivity), rarely are inter-comparisons made between techniques. Our objective is to compare methods for deriving canopy structural complexity (CSC) metrics and to assess the capacity of commonly available aerial remote sensing products (and combinations) to match terrestrially-sensed data. We also assess the potential to combine CSC metrics with image-based analysis to predict plot-based NPP measurements in forests of different ages and different levels of complexity. We use combinations of data from drone-based imagery (RGB, NIR, Red Edge), aerial LiDAR (commonly available medium-density leaf-off), terrestrial scanning LiDAR, portable canopy LiDAR, and a permanent plot network - all collected at the University of Michigan Biological Station. Our results will highlight the potential for deriving functionally meaningful CSC metrics from aerial imagery, LiDAR, and combinations of data sources. 
We will also present results of modeling focused on predicting plot-level NPP from combinations of image-based vegetation indices (e.g., NDVI, EVI) with LiDAR- or image-derived metrics of CSC (e.g., rugosity, porosity), canopy density, (e.g., LAI), and forest structure (e.g., canopy height). This work builds toward future efforts that will use other data combinations, such as those available at NEON sites, and could be used to inform and test popular ecosystem models (e.g., ED2) incorporating structure.
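One of the image-based vegetation indices named above, NDVI, is a simple band ratio; a minimal per-pixel sketch follows (the `eps` stabilizer for zero-reflectance pixels is our addition).

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance; ranges from -1 to 1, with dense green vegetation high."""
    return (nir - red) / (nir + red + eps)
```

EVI and the other indices mentioned follow the same pattern of normalized band arithmetic, with extra coefficients to correct for soil and atmosphere.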
Kim, So-Ra; Kwak, Doo-Ahn; Lee, Woo-Kyun; Son, Yowhan; Bae, Sang-Won; Kim, Choonsig; Yoo, Seongjin
2010-07-01
The objective of this study was to estimate the carbon storage capacity of Pinus densiflora stands using remotely sensed data, by combining digital aerial photography with light detection and ranging (LiDAR) data. A digital canopy model (DCM), generated from the LiDAR data, was combined with aerial photography for segmenting crowns of individual trees. To eliminate over- and under-segmentation errors, the combined image was smoothed using a Gaussian filtering method. The processed image was then segmented into individual trees using a marker-controlled watershed segmentation method. After measuring the crown area of the segmented individual trees, the individual tree diameter at breast height (DBH) was estimated using a regression function developed from the relationship observed between the field-measured DBH and crown area. The aboveground biomass of individual trees could then be calculated from the image-derived DBH using a regression function developed by the Korea Forest Research Institute. The carbon storage of individual trees was estimated by simple multiplication using the carbon conversion index (0.5), as suggested in guidelines from the Intergovernmental Panel on Climate Change. The mean carbon storage per individual tree was estimated and then compared with the field-measured value. This study suggests that the biomass and carbon storage in a large forest area can be effectively estimated using aerial photographs and LiDAR data.
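The estimation chain in this abstract (crown area to DBH via regression, DBH to aboveground biomass via allometry, then carbon = 0.5 × biomass) can be sketched as below. The regression and allometric coefficients `a`, `b`, `c0`, `c1` are made-up placeholders, not the fitted values or the Korea Forest Research Institute equations.

```python
def carbon_per_tree(crown_area_m2, a=1.5, b=0.6, c0=0.08, c1=2.5):
    """Illustrative per-tree pipeline: crown area -> DBH (power-law
    regression), DBH -> aboveground biomass (allometric equation),
    then carbon = 0.5 * biomass (IPCC carbon conversion index).
    All coefficients are hypothetical placeholders."""
    dbh_cm = a * crown_area_m2 ** b      # hypothetical DBH regression
    biomass_kg = c0 * dbh_cm ** c1       # hypothetical allometric model
    return 0.5 * biomass_kg              # IPCC conversion index 0.5
```

Stand-level carbon is then the sum over all segmented crowns, which is why segmentation quality drives the accuracy of the final estimate.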
Powers, P.S.; Chiarle, M.; Savage, W.Z.
1996-01-01
The traditional approach to making aerial photographic measurements uses analog or analytic photogrammetric equipment. We have developed a digital method for making measurements from aerial photographs which uses geographic information system (GIS) software and primarily DOS-based personal computers. This method, based on the concept that a direct visual comparison can be made between images derived from two sets of aerial photographs taken at different times, was applied to the surface of the active portion of the Slumgullion earthflow in Colorado to determine horizontal displacement vectors from the movements of visually identifiable objects, such as trees and large rocks. Using this method, more of the slide surface can be mapped in a shorter period of time than with the standard photogrammetric approach. More than 800 horizontal displacement vectors were determined on the active earthflow surface using images produced by our digital photogrammetric technique and 1985 (1:12,000-scale) and 1990 (1:6,000-scale) aerial photographs. The resulting displacement field shows, with a 2-m measurement error (≈10%), that the fastest moving portion of the landslide underwent 15-29 m of horizontal displacement between 1985 and 1990. Copyright © 1996 Elsevier Science Ltd.
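Once a matched object's positions in the two epochs are expressed in a common coordinate frame, the displacement computation itself is elementary; a sketch:

```python
def displacement(p_epoch1, p_epoch2):
    """Horizontal displacement vector and its magnitude between two
    positions (x, y) of the same object in two image epochs."""
    dx = p_epoch2[0] - p_epoch1[0]
    dy = p_epoch2[1] - p_epoch1[1]
    return (dx, dy), (dx * dx + dy * dy) ** 0.5
```

Repeating this for each of the several hundred matched trees and rocks yields the displacement field the abstract describes.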
Horváth, Gábor; Buchta, Krisztián; Varjú, Dezsö
2003-06-01
It is a well-known phenomenon that when we look into the water with two aerial eyes, both the apparent position and the apparent shape of underwater objects are different from the real ones because of refraction at the water surface. Earlier studies of the refraction-distorted structure of the underwater binocular visual field of aerial observers were restricted to either vertically or horizontally oriented eyes. We investigate a generalized version of this problem: We calculate the position of the binocular image point of an underwater object point viewed by two arbitrarily positioned aerial eyes, including oblique orientations of the eyes relative to the flat water surface. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveas, the structure of the underwater binocular visual field is computed and visualized in different ways as a function of the relative positions of the eyes. We show that a revision of certain earlier treatments of the aerial imaging of underwater objects is necessary. We analyze and correct some widespread erroneous or incomplete representations of this classical geometric optical problem that occur in different textbooks. Improving the theory of aerial binocular imaging of underwater objects, we demonstrate that the structure of the underwater binocular visual field of aerial observers distorted by refraction is more complex than has been thought previously.
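The refraction geometry underlying this analysis follows from Snell's law at the flat water surface. A minimal sketch for a single aerial eye and one underwater object point, solving for the refraction point by bisection (the scene parameterisation is an illustrative assumption, not the authors' binocular formulation):

```python
from math import sqrt

N_WATER = 1.33  # refractive index of water relative to air

def refraction_point(h, x_obj, d, tol=1e-10):
    """Horizontal position s of the surface point where a ray from an
    aerial eye at height h bends toward an underwater object at
    horizontal distance x_obj and depth d (Snell's law, bisection)."""
    lo, hi = 0.0, x_obj
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        sin_air = s / sqrt(s * s + h * h)
        sin_wat = (x_obj - s) / sqrt((x_obj - s) ** 2 + d * d)
        # residual sin_air - N_WATER*sin_wat increases monotonically in s
        if sin_air - N_WATER * sin_wat < 0:
            lo = s
        else:
            hi = s
    return 0.5 * (lo + hi)
```

Extending the object ray back from the eye through this surface point gives the apparent (refraction-shifted) image point; intersecting the back-projected rays of two such eyes yields the binocular image point studied in the paper.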
Ultramap v3 - a Revolution in Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.
2012-07-01
In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for processing UltraCam images to deliver DSM and ortho imagery.
The Limited Duty/Chief Warrant Officer Professional Guidebook
1985-01-01
They plan and manage the operation of imaging commands and activities, combat camera groups, and aerial reconnaissance imaging, including still picture and video systems used in aerial, surface, and subsurface imaging.
Embedded, real-time UAV control for improved, image-based 3D scene reconstruction
Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul
2016-01-01
Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...
An algorithm for approximate rectification of digital aerial images
USDA-ARS?s Scientific Manuscript database
High-resolution aerial photography is one of the most valuable tools available for managing extensive landscapes. With recent advances in digital camera technology, computer hardware, and software, aerial photography is easier to collect, store, and transfer than ever before. Images can be automa...
Feature-based registration of historical aerial images by Area Minimization
NASA Astrophysics Data System (ADS)
Nagarajan, Sudhagar; Schenk, Toni
2016-06-01
The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Other limitations include the methods of determining identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.
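The cost being minimized above, the area formed between corresponding TIL features, can be illustrated for a single pair of line segments with a shoelace computation (a hypothetical simplification: the actual method minimizes such areas over the exterior orientation parameters, and this quadrilateral form assumes the two segments do not cross):

```python
def area_between_segments(a0, a1, b0, b1):
    """Shoelace area of the quadrilateral a0-a1-b1-b0 formed between two
    corresponding (non-crossing) line segments; zero when the segments
    coincide, i.e. when the registration is perfect."""
    pts = [a0, a1, b1, b0]
    s = 0.0
    for i in range(4):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % 4]
        s += x0 * y1 - x1 * y0  # cross-product term of the shoelace formula
    return abs(s) / 2.0
```

Summing this area over all TIL pairs gives a registration cost that needs no explicit point-to-point correspondence, matching the paper's motivation for using line features.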
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
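The N-mixture framework used here marginalizes a latent Poisson abundance against binomial detection across repeated counts. A minimal likelihood sketch (the truncation bound n_max and the pure-Python summation are illustrative assumptions, not the authors' estimation code):

```python
import math

def nmix_loglik(counts, lam, p, n_max=60):
    """Log-likelihood of a basic N-mixture model: counts[j] holds the
    repeated counts for site j (one count per image); latent abundance
    N_j ~ Poisson(lam), each count y ~ Binomial(N_j, p)."""
    ll = 0.0
    for site in counts:
        site_lik = 0.0
        for n in range(max(site), n_max + 1):  # marginalise the latent N
            pois = math.exp(-lam) * lam ** n / math.factorial(n)
            det = 1.0
            for y in site:
                det *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            site_lik += pois * det
        ll += math.log(site_lik)
    return ll
```

Maximizing this over (lam, p), e.g. by a grid search, yields joint estimates of abundance and detection probability from the image-replicate counts alone.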
Aerial image databases for pipeline rights-of-way management
NASA Astrophysics Data System (ADS)
Jadkowski, Mark A.
1996-03-01
Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with fewer people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership with NASA and James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company, which operates major gas pipelines in New England, New York, and New Jersey.
Popescu, Dan; Ichim, Loretta; Stoican, Florin
2017-02-23
Floods are natural disasters which cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution, based on a hybrid network and complex image processing, to this problem. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, as a coordinator at distance, which communicates via the internet with more Ground Data Terminals, as a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and mass fractal dimension were used as well. Finally, the experimental results in a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Metric Aspects of Digital Images and Digital Image Processing.
1984-09-01
produced in a reconstructed digital image. Synthesized aerial photographs were formed by processing a combined elevation and orthophoto data base. These... brightness values h1 and h2, along with the borderline that separates the two intensity regions, and a line equation whose two parameters are calculated...
Evaluation of experimental UAV video change detection
NASA Astrophysics Data System (ADS)
Bartelsen, J.; Saur, G.; Teutsch, C.
2016-10-01
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects.
The primary concern of this paper is to rigorously evaluate the possibilities and limitations of our current approach to image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition, based on our approach, we illustrate results showing change detection in short but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material suitable for change detection will be acquired.
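The pixel-wise registration step above rests on estimating a homography between frame pairs. A minimal direct-linear-transform sketch in NumPy (illustrative only; the authors' photogrammetric concept and their handling of parallax are more elaborate):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate the 3x3 homography H mapping
    src points to dst points (N x 2 arrays, N >= 4, general position)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null-space vector of the stacked constraints
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map N x 2 points through H with perspective division."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Warping the "before" frame through the fitted H aligns it with the "after" frame, after which a simple per-pixel difference (with thresholding) plays the role of the automatic difference analysis.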
NASA Astrophysics Data System (ADS)
Heller, Andrew Roland
The Fort Clark State Historic Site (32ME2) is a well known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.
Detection of Aspens Using High Resolution Aerial Laser Scanning Data and Digital Aerial Images
Säynäjoki, Raita; Packalén, Petteri; Maltamo, Matti; Vehmas, Mikko; Eerikäinen, Kalle
2008-01-01
The aim was to use high resolution Aerial Laser Scanning (ALS) data and aerial images to detect European aspen (Populus tremula L.) from among other deciduous trees. The field data consisted of 14 sample plots of 30 m × 30 m size located in the Koli National Park in North Karelia, Eastern Finland. A Canopy Height Model (CHM) was interpolated from the ALS data with a pulse density of 3.86/m2, low-pass filtered using Height-Based Filtering (HBF) and binarized to create the mask needed to separate the ground pixels from the canopy pixels within individual areas. Watershed segmentation was applied to the low-pass filtered CHM in order to create preliminary canopy segments, from which the non-canopy elements were extracted to obtain the final canopy segmentation, i.e. the ground mask was analysed against the canopy mask. A manual classification of aerial images was employed to separate the canopy segments of deciduous trees from those of coniferous trees. Finally, linear discriminant analysis was applied to the correctly classified canopy segments of deciduous trees to classify them into segments belonging to aspen and those belonging to other deciduous trees. The independent variables used in the classification were obtained from the first pulse ALS point data. The accuracy of discrimination between aspen and other deciduous trees was 78.6%. The independent variables in the classification function were the proportion of vegetation hits, the standard deviation of pulse heights, the accumulated intensity at the 90th percentile and the proportion of laser points reflected at the 60th height percentile. The accuracy of classification corresponded to the validation results of earlier ALS-based studies on the classification of individual deciduous trees to tree species. PMID:27873799
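The final step above is a standard two-class linear discriminant analysis on per-segment ALS metrics. A minimal pooled-covariance sketch (illustrative; the synthetic feature clusters stand in for the study's actual ALS variables):

```python
import numpy as np

def lda_train(X0, X1):
    """Two-class linear discriminant: Fisher direction from the pooled
    within-class covariance, with a midpoint decision threshold."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    S = (np.cov(X0.T) * (len(X0) - 1)
         + np.cov(X1.T) * (len(X1) - 1)) / (len(X0) + len(X1) - 2)
    w = np.linalg.solve(S, mu1 - mu0)       # discriminant direction
    thresh = 0.5 * (w @ mu0 + w @ mu1)      # midpoint of projected means
    return w, thresh

def lda_predict(w, thresh, X):
    """Classify rows of X: 1 = class of X1 (e.g. aspen), 0 = other."""
    return (X @ w > thresh).astype(int)
```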
NASA Astrophysics Data System (ADS)
Melin, M.; Korhonen, L.; Kukkonen, M.; Packalen, P.
2017-07-01
Canopy cover (CC) is a variable used to describe the status of forests and forested habitats, but also the variable used primarily to define what counts as a forest. The estimation of CC has relied heavily on remote sensing, with past studies focusing on satellite imagery as well as Airborne Laser Scanning (ALS) using light detection and ranging (lidar). Of these, ALS has proven highly accurate, because the fraction of pulses penetrating the canopy represents a direct measurement of canopy gap percentage. However, the methods of photogrammetry can be applied to produce point clouds fairly similar to airborne lidar data from aerial images. Currently there is little information about how well such point clouds measure canopy density and gaps. The aim of this study was to assess the suitability of aerial image point clouds for CC estimation and compare the results with those obtained using spectral data from aerial images and Landsat 5. First, we modeled CC for n = 1149 lidar plots using field-measured CCs and lidar data. Next, these data were split into five subsets in the north-south direction (y-coordinate). Finally, four CC models (AerialSpectral, AerialPointcloud, AerialCombi (spectral + pointcloud) and Landsat) were created and used to predict new CC values for the lidar plots, subset by subset, using five-fold cross validation. The Landsat and AerialSpectral models performed with RMSEs of 13.8% and 12.4%, respectively. The AerialPointcloud model reached an RMSE of 10.3%, which was further improved by the inclusion of spectral data; the RMSE of the AerialCombi model was 9.3%. We noticed that the aerial image point clouds managed to describe only the outermost layer of the canopy and missed the details in the lower canopy, which resulted in weak characterization of the total CC variation, especially in the tails of the data.
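The subset-wise validation reported above amounts to computing a cross-validated RMSE. A minimal sketch with an ordinary least-squares model standing in for the CC models (the linear form and random fold assignment are illustrative assumptions; the study split folds geographically):

```python
import numpy as np

def cv_rmse(X, y, k=5, seed=0):
    """k-fold cross-validated RMSE of an ordinary least-squares model:
    fit on k-1 folds, predict the held-out fold, pool squared errors."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.column_stack([np.ones(len(train)), X[train]])  # intercept + predictor
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ coef
        errs.append((pred - y[test]) ** 2)
    return float(np.sqrt(np.concatenate(errs).mean()))
```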
Comparison of SLAR images and small-scale, low-sun aerial photographs.
NASA Technical Reports Server (NTRS)
Clark, M. M.
1971-01-01
A comparison of side-looking airborne radar (SLAR) images and black and white aerial photos of similar scale and illumination of an area in the Mojave Desert of California shows that aerial photos yield far more information about geology than do SLAR images because of greater resolution, tonal range, and geometric fidelity, and easier use in stereo. Nevertheless, radar can differentiate some materials or surfaces that aerial photos cannot; thus, they should be considered as complementary, rather than competing tools in geologic investigations. The most significant advantage of SLAR, however, is its freedom from the stringent conditions of weather, date, and time that are required by small-scale aerial photos taken with a specified direction and angle of illumination. Indeed, in low latitudes, SLAR is the only way to obtain small-scale images with low illumination from certain directions; moreover, in areas of nearly continuous cloudiness, radar may be the only practical source of small-scale images.
Small unmanned aerial vehicles (micro-UAVs, drones) in plant ecology.
Cruzan, Mitchell B; Weinstein, Ben G; Grasty, Monica R; Kohrn, Brendan F; Hendrickson, Elizabeth C; Arredondo, Tina M; Thompson, Pamela G
2016-09-01
Low-elevation surveys with small aerial drones (micro-unmanned aerial vehicles [UAVs]) may be used for a wide variety of applications in plant ecology, including mapping vegetation over small- to medium-sized regions. We provide an overview of methods and procedures for conducting surveys and illustrate some of these applications. Aerial images were obtained by flying a small drone along transects over the area of interest. Images were used to create a composite image (orthomosaic) and a digital surface model (DSM). Vegetation classification was conducted manually and using an automated routine. Coverage of an individual species was estimated from aerial images. We created a vegetation map for the entire region from the orthomosaic and DSM, and mapped the density of one species. Comparison of our manual and automated habitat classification confirmed that our mapping methods were accurate. A species with high contrast to the background matrix allowed an adequate estimate of its coverage. The example surveys demonstrate that small aerial drones are capable of gathering large amounts of information on the distribution of vegetation and individual species with minimal impact to sensitive habitats. Low-elevation aerial surveys have potential for a wide range of applications in plant ecology.
NASA Astrophysics Data System (ADS)
Panagiotopoulou, Antigoni; Bratsolis, Emmanuel; Charou, Eleni; Perantonis, Stavros
2017-10-01
The detailed three-dimensional modeling of buildings utilizing elevation data, such as those provided by light detection and ranging (LiDAR) airborne scanners, is increasingly demanded today. There are certain application requirements and available datasets to which any research effort has to be adapted. Our dataset includes aerial orthophotos, with a spatial resolution 20 cm, and a digital surface model generated from LiDAR, with a spatial resolution 1 m and an elevation resolution 20 cm, from an area of Athens, Greece. The aerial images are fused with LiDAR, and we classify these data with a multilayer feedforward neural network for building block extraction. The innovation of our approach lies in the preprocessing step in which the original LiDAR data are super-resolution (SR) reconstructed by means of a stochastic regularized technique before their fusion with the aerial images takes place. The Lorentzian estimator combined with the bilateral total variation regularization performs the SR reconstruction. We evaluate the performance of our approach against that of fusing unprocessed LiDAR data with aerial images. We present the classified images and the statistical measures confusion matrix, kappa coefficient, and overall accuracy. The results demonstrate that our approach predominates over that of fusing unprocessed LiDAR data with aerial images.
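The Lorentzian estimator named above is a robust error norm that down-weights outliers relative to least squares. A minimal sketch of its use in an iteratively reweighted least-squares location estimate (a deliberately simplified stand-in for the full SR reconstruction with bilateral total variation regularization; sigma is an assumed scale parameter):

```python
import numpy as np

def lorentzian_rho(e, sigma=1.0):
    """Lorentzian robust error norm: rho(e) = log(1 + (e/sigma)^2 / 2)."""
    return np.log1p(0.5 * (e / sigma) ** 2)

def robust_mean(x, sigma=1.0, iters=50):
    """IRLS location estimate under the Lorentzian norm; weights are
    proportional to rho'(e)/e, so large residuals are nearly ignored."""
    m = float(np.median(x))          # robust starting point
    for _ in range(iters):
        e = x - m
        w = 1.0 / (1.0 + 0.5 * (e / sigma) ** 2)
        m = float((w * x).sum() / w.sum())
    return m
```

In the SR setting the same reweighting idea is applied to the reconstruction residuals of each low-resolution observation, which is what makes the estimate tolerant of registration outliers.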
1983-09-01
Report AI-TR-346, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, June 19... [A. Guzman-Arenas... Testbed Coordinator, 415/859-4395, Artificial Intelligence Center, Computer Science and Technology Division. Prepared for: Defense Advanced Research... to support processing of aerial photographs for such military applications as cartography, intelligence, weapon guidance, and targeting. A key
Toward autonomous avian-inspired grasping for micro aerial vehicles.
Thomas, Justin; Loianno, Giuseppe; Polin, Joseph; Sreenath, Koushil; Kumar, Vijay
2014-06-01
Micro aerial vehicles, particularly quadrotors, have been used in a wide range of applications. However, the literature on aerial manipulation and grasping is limited and the work is based on quasi-static models. In this paper, we draw inspiration from agile, fast-moving birds such as raptors, that are able to capture moving prey on the ground or in water, and develop similar capabilities for quadrotors. We address dynamic grasping, an approach to prehensile grasping in which the dynamics of the robot and its gripper are significant and must be explicitly modeled and controlled for successful execution. Dynamic grasping is relevant for fast pick-and-place operations, transportation and delivery of objects, and placing or retrieving sensors. We show how this capability can be realized (a) using a motion capture system and (b) without external sensors relying only on onboard sensors. In both cases we describe the dynamic model, and trajectory planning and control algorithms. In particular, we present a methodology for flying and grasping a cylindrical object using feedback from a monocular camera and an inertial measurement unit onboard the aerial robot. This is accomplished by mapping the dynamics of the quadrotor to a level virtual image plane, which in turn enables dynamically-feasible trajectory planning for image features in the image space, and a vision-based controller with guaranteed convergence properties. We also present experimental results obtained with a quadrotor equipped with an articulated gripper to illustrate both approaches.
Using aerial photography and image analysis to measure changes in giant reed populations
USDA-ARS?s Scientific Manuscript database
A study was conducted along the Rio Grande in southwest Texas to evaluate color-infrared aerial photography combined with supervised image analysis to quantify changes in giant reed (Arundo donax L.) populations over a 6-year period. Aerial photographs from 2002 and 2008 of the same seven study site...
NASA Astrophysics Data System (ADS)
Hlotov, Volodymyr; Hunina, Alla; Siejka, Zbigniew
2017-06-01
The main purpose of this work is to confirm the possibility of making large-scale orthophotomaps using the Trimble UX5 unmanned aerial vehicle (UAV). A planned altitude reference of the study area was carried out prior to the aerial survey. The study area was marked with distinctive reference points in the form of triangles (0.5 × 0.5 × 0.2 m); the checkpoints used to assess the accuracy of the orthophotomap were marked with similar triangles. The coordinates of the marked reference points and checkpoints were determined by GNSS measurements in real-time kinematic (RTK) mode. The aerial survey was planned with the Trimble Access Aerial Imaging software, which was also used to operate the UX5. The survey was performed with the Trimble UX5 UAV using the SONY NEX-5R digital camera from altitudes of 200 m and 300 m. The survey data were processed with the photogrammetric software Pix4D, which was used to produce the orthophotomap of the surveyed objects. To determine the accuracy of the survey results, the checkpoint coordinates were read from the orthophotomap and the mean square errors were calculated against the GNSS measurements. A priori accuracy estimates of the spatial coordinates derived from the aerial survey data were calculated: mx = 0.11 m, my = 0.15 m, mz = 0.23 m in the village of Remeniv and mx = 0.26 m, my = 0.38 m, mz = 0.43 m in the town of Vynnyky. The accuracy of the checkpoint coordinates determined from the UAV images was investigated together with the mean square errors of the reference points. A comparative analysis of the accuracy estimates for the resulting orthophotomap shows that the mean square errors do not exceed the a priori estimates.
The possibility of applying the Trimble UX5 UAV for making large-scale orthophotomaps has thus been demonstrated. Aerial survey data acquired with a UAV can be applied to monitoring objects potentially dangerous to people, state border control, and surveying settlement plots, so it is important to control the accuracy of the results. Based on the analysis and experimental research presented here, it can be concluded that using a UAV makes it possible to acquire data more efficiently than land surveying methods. As a result, the Trimble UX5 UAV makes it possible to survey built-up territories with the accuracy required for orthophotomaps at scales of 1:2000, 1:1000, and 1:500.
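The reported accuracy figures (mx, my, mz) are per-axis mean square errors between GNSS-measured checkpoint coordinates and those read from the orthophotomap. A minimal sketch of that computation (the N x 3 array layout is an illustrative assumption):

```python
import numpy as np

def checkpoint_rmse(gnss, ortho):
    """Per-axis root mean square error (mx, my, mz) between GNSS-measured
    checkpoint coordinates and map-derived coordinates (N x 3 arrays)."""
    d = np.asarray(ortho, float) - np.asarray(gnss, float)
    return tuple(np.sqrt((d ** 2).mean(axis=0)))
```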
NASA Technical Reports Server (NTRS)
Wallace, R. E.
1969-01-01
Nine-frame multiband aerial photography of a sample area 4500 feet on a side was processed to enhance spectral contrasts. The area concerned is in the Carrizo Plain, 45 miles west of Bakersfield, California, in sec. 29, T 31 S., R. 21 E., as shown on the Panorama Hills quadrangle topographic map published by the U.S. Geological Survey. The accompanying illustrations include an index map showing the location of the Carrizo Plain area; a geologic map of the area based on field studies and examination of black and white aerial photographs; an enhanced multiband aerial photograph; an Aero Ektachrome photograph; black and white aerial photographs; and an infrared image in the 8-13 micron band.
NASA Astrophysics Data System (ADS)
Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred
2013-04-01
In recent years, high-resolution, multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and, especially, low-cost aerial images from unmanned aerial vehicles (UAVs) has grown in importance. This development has resulted from the progressive technical improvement of imaging systems and from freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for generating accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (Southern French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day was investigated. The TLS-based point clouds were acquired from different viewpoints, with an average point spacing between 10 and 40 mm, and at different dates. On these days, more than 50 optical images were taken with a low-cost digital compact camera at points along a predefined line on the side of the landslide. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m, producing ground resolutions between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we used the widespread open-source software packages Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required, on the one hand, to determine three-dimensional surface deformations and, on the other hand, for the differential correction needed in orthophoto production. Drawing on the example of the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of photogrammetric point clouds.
To assess the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that the accuracy of the photogrammetric points is in the cm-to-dm range and therefore does not reach the quality of the high-resolution TLS-based DSMs. Furthermore, the validation reveals that some of the photogrammetric point clouds exhibit internal curvature effects. The advantages of photogrammetric 3D data acquisition are the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of TLS-based DSMs, the former method pays off in areas where dm-range accuracy is sufficient.
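The cloud-to-cloud comparison described above can be sketched as a nearest-neighbour distance query against the TLS reference. This is a minimal illustration on synthetic data, not the authors' actual workflow; the function name is ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_distances(cloud, reference):
    """For each point in `cloud`, the distance to its nearest neighbour
    in the `reference` cloud (e.g. a TLS-based point cloud)."""
    tree = cKDTree(reference)
    distances, _ = tree.query(cloud)
    return distances

# Synthetic example: a flat reference grid and a cloud offset by 2 cm in z.
xx, yy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
reference = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(400)])
cloud = reference + np.array([0.0, 0.0, 0.02])

d = cloud_to_reference_distances(cloud, reference)
print(round(float(d.mean()), 3))  # 0.02, i.e. a 2 cm mean deviation
```

Systematic patterns in these distances (rather than random noise) are what would reveal the internal curvature effects mentioned in the abstract.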
ERIC Educational Resources Information Center
Williams, Richard S., Jr.; Kover, Allan W.
1978-01-01
The steady growth of the Landsat image data base continues to make this kind of remotely sensed data second only to aerial photographs in use by geoscientists who employ image data in their research. Article reviews data uses, meetings and symposia, publications, problems, and future trends. (Author/MA)
UAV-Borne Low-Altitude Photogrammetry System
NASA Astrophysics Data System (ADS)
Lin, Z.; Su, G.; Xie, F.
2012-07-01
In this paper, three major aspects of unmanned aerial vehicle (UAV) systems for low-altitude aerial photogrammetry are discussed: the flying platform, the imaging sensor system and the data-processing software. First, according to technical requirements concerning minimum cruising speed, shortest taxiing distance, level of flight control and performance in turbulence, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on platform payload weight and sensor resolution, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating, self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, which accounts for the particularities of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts extra effort into the automatic extraction, automatic checking and manually assisted adding of tie points for images with large tilt angles. Based on the process for low-altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished, in which the accuracies of aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.
NASA Astrophysics Data System (ADS)
El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno
2015-10-01
This paper introduces an interactive recognition assistance system for imaging reconnaissance. The system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types; this is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originate from the image signatures.
The infrastructure analysis mode pursues the goal of analyzing the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers the possibility of assigning the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains, such as ships or land vehicles. Each domain has its own feature tree developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to only those objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features such as its width and length. This step makes it possible to automatically reduce the set of possible object types offered to the image analyst by the interactive recognition assistance system.
Small unmanned aerial vehicles (micro-UAVs, drones) in plant ecology1
Cruzan, Mitchell B.; Weinstein, Ben G.; Grasty, Monica R.; Kohrn, Brendan F.; Hendrickson, Elizabeth C.; Arredondo, Tina M.; Thompson, Pamela G.
2016-01-01
Premise of the study: Low-elevation surveys with small aerial drones (micro–unmanned aerial vehicles [UAVs]) may be used for a wide variety of applications in plant ecology, including mapping vegetation over small- to medium-sized regions. We provide an overview of methods and procedures for conducting surveys and illustrate some of these applications. Methods: Aerial images were obtained by flying a small drone along transects over the area of interest. Images were used to create a composite image (orthomosaic) and a digital surface model (DSM). Vegetation classification was conducted manually and using an automated routine. Coverage of an individual species was estimated from aerial images. Results: We created a vegetation map for the entire region from the orthomosaic and DSM, and mapped the density of one species. Comparison of our manual and automated habitat classification confirmed that our mapping methods were accurate. A species with high contrast to the background matrix allowed an adequate estimate of its coverage. Discussion: The example surveys demonstrate that small aerial drones are capable of gathering large amounts of information on the distribution of vegetation and individual species with minimal impact on sensitive habitats. Low-elevation aerial surveys have potential for a wide range of applications in plant ecology. PMID:27672518
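The coverage estimate for a high-contrast species reduces, in its simplest form, to thresholding and counting pixels. A toy sketch on a synthetic orthomosaic tile (the threshold and values are illustrative, not from the paper):

```python
import numpy as np

def species_coverage(image, threshold):
    """Fraction of pixels brighter than `threshold`, as a proxy for the
    ground coverage of a species that contrasts with the background matrix."""
    mask = image > threshold
    return mask.mean()

# Hypothetical 100x100 tile: dark background matrix with one bright patch.
tile = np.full((100, 100), 0.2)
tile[10:30, 10:30] = 0.9   # 400 bright "target species" pixels
print(species_coverage(tile, 0.5))  # 0.04
```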
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue-style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer-vision-based image-processing techniques. The method overcomes the inherent ambiguity that arises when reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These include curve-fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space and to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between itself and the target autonomously. PMID:25970254
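The RANSAC-based drogue-center estimation can be sketched as a robust circle fit to rim points, here simplified to 2D (e.g. after projecting the LIDAR points onto the drogue plane). The data are synthetic and the tolerances are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    # Equating |p - c|^2 for the three points gives a linear system in c.
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p1 - c)

def ransac_circle(points, n_iter=300, tol=0.02, seed=0):
    """Fit a circle to noisy rim points, ignoring outliers (RANSAC)."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(n_iter):
        i, j, k = rng.choice(len(points), 3, replace=False)
        try:
            c, r = circle_from_3pts(points[i], points[j], points[k])
        except np.linalg.LinAlgError:
            continue  # nearly collinear sample
        inliers = int(np.sum(np.abs(np.linalg.norm(points - c, axis=1) - r) < tol))
        if inliers > best[2]:
            best = (c, r, inliers)
    return best[0], best[1]

# Synthetic drogue rim (radius 0.3 m around (1.0, 2.0)) plus clutter points.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
rim = np.column_stack([1.0 + 0.3 * np.cos(t), 2.0 + 0.3 * np.sin(t)])
clutter = np.random.default_rng(1).uniform(0, 3, size=(15, 2))
center, radius = ransac_circle(np.vstack([rim, clutter]))
print(np.round(center, 2), round(radius, 2))
```

The consensus step is what makes the estimate robust: clutter points rarely agree on a single circle, so the rim model wins the inlier vote.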
3D City Transformations by Time Series of Aerial Images
NASA Astrophysics Data System (ADS)
Adami, A.
2015-02-01
Recent photogrammetric applications based on dense image matching algorithms make it possible not only to use images acquired by digital cameras, amateur or not, but also to recover the vast heritage of analogue photographs. This opens up many possibilities for the use and enhancement of the existing photographic heritage. The search for the original appearance of old buildings, the virtual reconstruction of disappeared architectures and the study of urban development are some of the application areas that exploit the great cultural heritage of photography. Nevertheless, there are some restrictions in the use of historical images for the automatic reconstruction of buildings, such as image quality, the availability of camera parameters and the ineffective geometry of image acquisition. These constraints are very hard to overcome, and for the above reasons it is difficult to find good datasets in the case of terrestrial close-range photogrammetry. Even the photographic archives of museums and superintendencies, while retaining a wealth of documentation, have no datasets suitable for a dense image matching approach. Compared to the vast collection of historical photos, the class of aerial photos meets both criteria stated above. In this paper, historical aerial photographs are used with dense image matching algorithms to build 3D models of a city in different years. The models can be used to study the urban development of the city and its changes through time. The application relates to the city centre of Verona, for which several time series of aerial photographs have been retrieved. The models obtained in this way allowed immediate observation of the urban development of the city, the places of expansion and the new urban areas. But a more interesting aspect emerged from the analytical comparison between models: the difference between two models, taken as the Euclidean distance, gives information about new buildings or demolitions.
Regarding accuracy, it is necessary to point out that the quality of the final observations from model comparison depends on several aspects, such as image quality, image scale and the accuracy of markers taken from cartography.
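The model-to-model differencing described above can be sketched on gridded surface models: a per-cell height difference flags likely new buildings and demolitions. The heights and threshold below are synthetic illustrations, not values from the Verona study:

```python
import numpy as np

def detect_changes(dsm_old, dsm_new, threshold=2.0):
    """Per-cell height difference between two DSM epochs: +1 marks a
    likely new building, -1 a likely demolition, 0 no change."""
    diff = dsm_new - dsm_old
    return np.where(diff > threshold, 1, np.where(diff < -threshold, -1, 0))

old = np.zeros((50, 50))
new = np.zeros((50, 50))
new[5:15, 5:15] = 8.0      # 8 m high structure appears in the new epoch
old[30:40, 30:40] = 6.0    # 6 m high structure present only in the old epoch
changes = detect_changes(old, new)
print(int((changes == 1).sum()), int((changes == -1).sum()))  # 100 100
```

The threshold absorbs the cm-to-dm noise of the photogrammetric models, so only genuine construction or demolition survives the comparison.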
Earth mapping - aerial or satellite imagery comparative analysis
NASA Astrophysics Data System (ADS)
Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo
Nowadays, solving tasks involving the revision of existing map products and the creation of new maps requires choosing a source of land-cover imagery. The trade-off between the effectiveness and cost of aerial mapping systems and those of very-high-resolution satellite imagery is a topical issue [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in an archive or has to be requested. The purpose of the present work is to make a comparative analysis between the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting map information sources - airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene, and an area approximately equal to the territory of Bulgaria.
NASA Astrophysics Data System (ADS)
Leberl, F.; Gruber, M.; Ponticelli, M.; Wiechert, A.
2012-07-01
The UltraCam project created a novel large-format digital aerial camera. It was inspired by the ISPRS Congress 2000 in Amsterdam. The search for a promising imaging idea succeeded in May 2001, defining a tiling approach with multiple lenses and multiple area CCD arrays to assemble a seamless and geometrically stable monolithic photogrammetric aerial large-format image. First resources were spent on the project in September 2001. The initial UltraCam-D was announced and demonstrated in May 2003. By now the imaging principle has resulted in a 4th-generation UltraCam Eagle, increasing the original swath width from 11,500 pixels to beyond 20,000. Inspired by the original imaging principle, alternatives have been investigated, and the UltraCam-G carries the swath width even further, to a frame image with nearly 30,000 pixels, albeit with a modified tiling concept optimized for orthophoto production. We explain the advent of digital aerial large-format imaging and how it benefits from improvements in computing technology to cope with data flows at a rate of 3 gigabits per second and the need to deal with terabytes of imagery within a single aerial sortie. We also address the many benefits of the transition to a fully digital workflow, with a paradigm shift away from minimizing a project's number of aerial photographs and towards maximizing the automation of photogrammetric workflows by means of high-redundancy imaging strategies. The instant gratification of near-real-time aerial triangulation and dense image matching has led to a reassessment of the value of photogrammetric point clouds, which successfully compete with direct point cloud measurements by LiDAR.
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The output trimap initializes the GrabCut background instead of manual intervention and allows segmentation to proceed without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model can be applied to the automated processing of industrial images in related research.
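The region-growing step that builds the background trimap can be sketched as a flood fill over the near-uniform sea surface; everything the fill does not reach becomes "probable foreground". This is a simplified stand-in for the paper's pipeline (no template matching or GrabCut here), on synthetic data:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=0.1):
    """4-connected region growing from `seed`: absorb neighbours whose
    value is within `tol` of the seed value."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(image[nr, nc] - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Synthetic scene: uniform sea (0.5) with one bright ship-like blob (0.9).
sea = np.full((40, 40), 0.5)
sea[10:20, 10:20] = 0.9
bg = region_grow(sea, (0, 0))    # grows over the sea only
trimap = np.where(bg, 0, 2)      # 0 = definite background, 2 = probable ship
print(int((trimap == 2).sum()))  # 100
```

A trimap of this shape is what would then seed GrabCut's background model in place of manual strokes.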
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
The automated and cost-effective detection of buildings at ultra-high spatial resolution is of major importance for various engineering and smart-city applications. To this end, a model-based building detection technique has been developed in this paper that is able to extract and reconstruct buildings from UAV aerial imagery and low-cost imaging sensors. In particular, the developed approach computes, through advanced structure from motion, bundle adjustment and dense image matching, a DSM and a true orthomosaic from numerous GoPro images, which are characterised by significant geometric distortions and a fish-eye effect. An unsupervised multi-region graph-cut segmentation and a rule-based classification are responsible for delivering the initial multi-class classification map. The DTM is then calculated based on an inpainting and mathematical morphology process. A data fusion process between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and the scene's buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating detection rates at object level of 88% regarding correctness and above 75% regarding detection completeness.
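One common way to realise the DSM/DTM step with mathematical morphology is a grayscale opening, which removes structures smaller than the filter window; the normalised DSM (nDSM = DSM - DTM) then highlights building candidates. This is a generic sketch of that idea on synthetic data, not the authors' exact procedure, and the window size and threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import grey_opening

def ndsm_from_dsm(dsm, window=15):
    """Approximate the DTM by a grayscale morphological opening of the
    DSM (suppresses objects smaller than the window), then return the
    normalised DSM used for building detection."""
    dtm = grey_opening(dsm, size=(window, window))
    return dsm - dtm

dsm = np.zeros((60, 60))
dsm[20:30, 20:30] = 10.0          # a 10 m high building footprint
ndsm = ndsm_from_dsm(dsm)
print(int((ndsm > 5).sum()))      # 100 cells flagged as building
```

Because the 10x10 building is smaller than the 15x15 window, the opening erases it from the terrain estimate, and the full footprint reappears in the nDSM.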
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Jiangye
Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels from images, as they are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of manually labeled training data.
NASA Astrophysics Data System (ADS)
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of, and interest in, image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. To this end, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general-purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of the input sequences. One important aspect of reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the locations of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime.
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
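The effect of non-equidistant plane sampling can be illustrated with the simplest variant: sampling uniformly in inverse depth (disparity) rather than in depth. This is a generic sketch of that idea, not the paper's cross-ratio construction:

```python
import numpy as np

def sweep_planes_inverse(z_near, z_far, n):
    """Plane depths sampled uniformly in inverse depth (disparity),
    giving dense sampling near the camera and sparse sampling far away."""
    inv = np.linspace(1.0 / z_near, 1.0 / z_far, n)
    return 1.0 / inv

depths = sweep_planes_inverse(2.0, 100.0, 64)
near_steps = np.diff(depths[:5])    # spacing close to the camera
far_steps = np.diff(depths[-5:])    # spacing in distant regions
print(bool(near_steps.mean() < far_steps.mean()))  # True
```

The same plane budget therefore resolves fine near-field detail without wasting hypotheses in distant regions, which is the runtime saving the experiments report.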
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation through the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it from urban and agricultural features. Three classification algorithms have been tested in order to better recognize vegetation and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows in the images. The literature presents several algorithms for detecting and removing shadows in the scene, most of them based on RGB-to-HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Experimental results obtained by different classification methods after shadow removal with the innovative algorithms are presented and discussed.
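The NDVI baseline mentioned above is a per-pixel band ratio: vegetation reflects strongly in the near infrared and absorbs red, so NDVI is high over vegetation and low or negative elsewhere. A minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red + eps)

# Two hypothetical pixels: vegetation (high NIR) and bare surface (high red).
nir = np.array([[0.6, 0.1]])
red = np.array([[0.1, 0.6]])
print(np.round(ndvi(nir, red), 3))  # [[ 0.714 -0.714]]
```

A shadow lowers both bands roughly proportionally, which is why NDVI alone cannot separate shaded vegetation from shaded built surfaces and shadow removal is needed first.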
NASA Astrophysics Data System (ADS)
Peppa, M. V.; Mills, J. P.; Fieber, K. D.; Haynes, I.; Turner, S.; Turner, A.; Douglas, M.; Bryan, P. G.
2018-05-01
Understanding and protecting cultural heritage involves the detection and long-term documentation of archaeological remains alongside the spatio-temporal analysis of their landscape evolution. Archive aerial photography can illuminate traces of ancient features, which typically appear with different brightness values from their surrounding environment but are not always well defined. This research investigates the implementation of the Structure-from-Motion / Multi-View Stereo image matching approach with an image enhancement algorithm to derive three epochs of orthomosaics and digital surface models from visible and near-infrared historic aerial photography. The enhancement algorithm uses decorrelation stretching to improve the contrast of the orthomosaics so that archaeological features are better detected. Results include 2D / 3D locations of detected archaeological traces, stored in a geodatabase for further archaeological interpretation and correlation with benchmark observations. The study also discusses the merits and difficulties of the process involved. This research is based on a European-wide project entitled "Cultural Heritage Through Time", and the case study was carried out as a component of the project in the UK.
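Decorrelation stretching can be sketched as whitening the band covariance with an eigen-decomposition so that each band carries independent, equal-variance contrast. A minimal version on synthetic correlated bands (the paper's exact formulation may rescale differently):

```python
import numpy as np

def decorrelation_stretch(bands):
    """Decorrelation stretch: whiten the band covariance via an
    eigen-decomposition, removing inter-band correlation."""
    flat = bands.reshape(-1, bands.shape[-1]).astype(float)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    # Rotate into decorrelated axes, equalise variance, rotate back.
    transform = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-12)) @ vecs.T
    out = (flat - mean) @ transform
    return out.reshape(bands.shape)

rng = np.random.default_rng(1)
base = rng.normal(size=(32, 32, 1))
# Two strongly correlated synthetic bands, as in faded archive imagery.
img = np.concatenate([base, 0.9 * base + 0.1 * rng.normal(size=(32, 32, 1))], axis=-1)
stretched = decorrelation_stretch(img)
corr = np.corrcoef(stretched.reshape(-1, 2), rowvar=False)[0, 1]
print(abs(corr) < 1e-6)  # True: the output bands are decorrelated
```

Subtle brightness differences that were masked by the shared inter-band signal, such as crop marks over buried features, become visible after the correlated component is removed.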
Neural-network classifiers for automatic real-world aerial image recognition
NASA Astrophysics Data System (ADS)
Greenberg, Shlomo; Guterman, Hugo
1996-08-01
We describe the application of the multilayer perceptron (MLP) network and a version of the adaptive resonance theory version 2-A (ART 2-A) network to the problem of automatic aerial image recognition (AAIR). The classification of aerial images, independent of their positions and orientations, is required for automatic tracking and target recognition. Invariance is achieved by the use of different invariant feature spaces in combination with supervised and unsupervised neural networks. The performance of neural-network-based classifiers in conjunction with several types of invariant AAIR global features, such as the Fourier-transform space, Zernike moments, central moments, and polar transforms, are examined. The advantages of this approach are discussed. The performance of the MLP network is compared with that of a classical correlator. The MLP neural-network correlator outperformed the binary phase-only filter (BPOF) correlator. It was found that the ART 2-A distinguished itself with its speed and its low number of required training vectors. However, only the MLP classifier was able to deal with a combination of shift and rotation geometric distortions.
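Among the invariant features listed above, central moments are the simplest: referencing the moments to the image centroid makes them invariant to translation. A small sketch verifying that property on a synthetic image (not the authors' feature pipeline):

```python
import numpy as np

def central_moments(image, p, q):
    """Translation-invariant central moment mu_pq of a grayscale image."""
    y, x = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    xc, yc = (x * image).sum() / m00, (y * image).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * image).sum()

img = np.zeros((32, 32))
img[5:10, 5:12] = 1.0
shifted = np.roll(np.roll(img, 7, axis=0), 3, axis=1)  # pure translation
print(np.isclose(central_moments(img, 2, 0),
                 central_moments(shifted, 2, 0)))  # True
```

Rotation invariance requires a further step (e.g. Hu moment combinations, Zernike moments, or a polar transform), which is why the paper compares several such feature spaces.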
NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Recently, aerial photography with unmanned aerial vehicle (UAV) systems has relied on remote control through a ground control system connected over a radio-frequency (RF) modem at a bandwidth of about 430 MHz. However, this RF-modem method has limitations for long-distance communication. Using a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, we implemented a UAV communication module and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device mounted on the drone for the area that needs imaging, together with software for operating and managing the smart camera. The system is composed of automatic shooting, which uses the smart camera's sensors, and shooting-catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using a smart camera as the payload for a photogrammetric UAV system. The open-source tools used include Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. ...
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. PHOTOGRAPH TAKEN ON 18 MAY 1948. NCA HISTORY COLLECTION. - Knoxville National Cemetery, 939 Tyson Street, Northwest, Knoxville, Knox County, TN
Cooperative Surveillance and Pursuit Using Unmanned Aerial Vehicles and Unattended Ground Sensors
Las Fargeas, Jonathan; Kabamba, Pierre; Girard, Anouck
2015-01-01
This paper considers the problem of path planning for a team of unmanned aerial vehicles performing surveillance near a friendly base. The unmanned aerial vehicles do not possess sensors with automated target recognition capability and, thus, rely on communicating with unattended ground sensors placed on roads to detect and image potential intruders. The problem is motivated by persistent intelligence, surveillance, reconnaissance and base defense missions. The problem is formulated and shown to be intractable. A heuristic algorithm to coordinate the unmanned aerial vehicles during surveillance and pursuit is presented. Revisit deadlines are used to schedule the vehicles' paths nominally. The algorithm uses detections from the sensors to predict intruders' locations and selects the vehicles' paths by minimizing a linear combination of missed deadlines and the probability of not intercepting intruders. An analysis of the algorithm's completeness and complexity is then provided. The effectiveness of the heuristic is illustrated through simulations in a variety of scenarios. PMID:25591168
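The nominal scheduling step described above, revisit deadlines driving path selection, can be illustrated with a greedy earliest-deadline-first choice. This is a toy stand-in for the paper's heuristic (which also weighs intercept probability), with hypothetical sensor names and times:

```python
def next_waypoint(current_time, travel_times, deadlines):
    """Greedy nominal schedule: fly to the reachable sensor whose
    revisit deadline is tightest (earliest-deadline-first)."""
    feasible = {s: d for s, d in deadlines.items()
                if current_time + travel_times[s] <= d}
    if not feasible:
        # No deadline can be met; minimise lateness instead.
        return min(deadlines, key=lambda s: deadlines[s] - travel_times[s])
    return min(feasible, key=feasible.get)

travel = {"A": 5.0, "B": 2.0, "C": 9.0}      # minutes to each ground sensor
deadline = {"A": 12.0, "B": 30.0, "C": 11.0}  # revisit deadlines
print(next_waypoint(0.0, travel, deadline))  # C: tightest feasible deadline
```

In the paper's full algorithm, detections from the ground sensors perturb this nominal schedule toward predicted intruder locations.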
NASA Astrophysics Data System (ADS)
Ito, Shusei; Uchida, Keitaro; Mizushina, Haruki; Suyama, Shiro; Yamamoto, Hirotsugu
2017-02-01
Security is one of the big issues in automated teller machines (ATMs). In an ATM, two types of security have to be maintained: one is to secure the displayed information; the other is to protect against screen contamination. This paper gives a solution for both security issues. In order to secure information against peeping at the screen, we utilize visual cryptography for the displayed information and limit the viewing zone. Furthermore, an aerial information screen formed with aerial imaging by retro-reflection (AIRR) enables users to avoid directly touching the information screen. The purpose of this paper is to propose an aerial secure display technique that ensures the security of the displayed information as well as security against the contamination problem of screen touch. We have developed a polarization-processing display that is composed of a backlight, a polarizer, a background LCD panel, a gap, a half-wave retarder, and a foreground LCD panel. The polarization angle is rotated by the LCD panels. We have constructed a polarization encryption code set. The sizes of the displayed images are designed to limit the viewing position. Furthermore, this polarization-processing display has been introduced into our aerial imaging optics, which employs a reflective polarizer and a retro-reflector covered with a quarter-wave retarder. The polarization-modulated light forms a real image over the reflective polarizer. We have successfully formed an aerial information screen that shows the secret image only from a limited viewing position. This is the first realization of an aerial secure display by use of a polarization-processing display with retarder film and a retro-reflector.
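The visual cryptography principle behind the two stacked panels can be sketched digitally with a two-share XOR scheme: each share alone is indistinguishable from noise, and only combining both reveals the secret. This is an abstract analogue of the paper's polarization encoding, not its optical implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_shares(secret):
    """Two-share XOR visual cryptography for a binary image: each share
    alone is uniform random noise; combining both recovers the secret."""
    share1 = rng.integers(0, 2, size=secret.shape)
    share2 = share1 ^ secret
    return share1, share2

secret = np.zeros((8, 8), dtype=int)
secret[2:6, 2:6] = 1                 # a simple secret glyph
s1, s2 = make_shares(secret)
print(bool(np.array_equal(s1 ^ s2, secret)))  # True
```

In the optical realisation, the "XOR" is performed physically by the two polarization-rotating LCD panels, so the recombination only occurs along the intended viewing direction.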
EUV phase-shifting masks and aberration monitors
NASA Astrophysics Data System (ADS)
Deng, Yunfei; Neureuther, Andrew R.
2002-07-01
Rigorous electromagnetic simulation with TEMPEST is used to examine the use of phase-shifting masks in EUV lithography. The effects of oblique incident illumination and of mask patterning by ion-mixing of multilayers are analyzed. Oblique incident illumination causes streamers at absorber edges and position shifts in aerial images. The diffracted waves between ion-mixed and pristine multilayers are observed. The phase shift caused by stepped substrates is simulated, and the images show that this approach succeeds in creating phase-shifting effects. The diffraction process at the phase boundary is also analyzed. As an example of EUV phase-shifting masks, a coma-pattern-and-probe-based aberration monitor is simulated, and aerial images are formed under different levels of coma aberration. The probe signal rises quickly as coma increases, as designed.
UAV-based Natural Hazard Management in High-Alpine Terrain - Case Studies from Austria
NASA Astrophysics Data System (ADS)
Sotier, Bernadette; Adams, Marc; Lechner, Veronika
2015-04-01
Unmanned Aerial Vehicles (UAVs) have become a standard tool for geodata collection, as they allow conducting on-demand mapping missions in a flexible, cost-effective manner at an unprecedented level of detail. Easy-to-use, high-performance image matching software makes it possible to process the collected aerial images into orthophotos and 3D terrain models. Such up-to-date geodata have proven to be an important asset in natural hazard management: processes like debris flows, avalanches, landslides, fluvial erosion and rock-fall can be detected and quantified, and damages can be documented and evaluated. In the Alps, these processes mostly originate in remote areas, which are difficult and hazardous to access, thus presenting a challenging task for RPAS data collection. In particular, the problems include finding suitable landing and piloting places, dealing with poor or absent GPS signals, and installing ground control points (GCPs) for georeferencing. At the BFW, RPAS have been used since 2012 to aid natural hazard management of various processes, of which three case studies are presented below. The first case study deals with the results of an attempt to employ UAV-based multi-spectral remote sensing to monitor the state of natural hazard protection forests. Images in the visible and near-infrared (NIR) bands were collected using modified low-cost cameras combined with different optical filters. Several UAV flights were performed in 2014 in the 72 ha study site, which lies in the Wattental, Tyrol (Austria) between 1700 and 2050 m a.s.l., where the main tree species are stone pine and mountain pine. The matched aerial images were analysed using different UAV-specific vitality indices, evaluating both single- and dual-camera UAV missions. To calculate the mass balance of a debris flow in the Tyrolean Halltal (Austria), an RPAS flight was conducted in autumn 2012.
The extreme alpine environment was challenging for both the mission and the evaluation of the aerial images: in the upper part of the steep channel no GPS signal was available because of the high surrounding rock faces, and the landing area consisted of coarse gravel. Therefore, only a manual flight with a high risk of damage was possible. With the RPAS-based digital surface model created from the 600 aerial images, a chronologically resolved back-calculation of the last big debris-flow event could be performed. In a third case study, aerial images from RPAS were used for a similar investigation in Virgen, Eastern Tyrol (Austria). A debris flow in the Firschnitzbach catchment caused severe damage to the village of Virgen in August 2012. An RPAS flight was performed in order to refine the estimate of the displaced debris mass for assessment purposes. The upper catchment of the Firschnitzbach is situated above the timberline and covers an area of 6.5 ha over a height difference of 1000 m. Therefore, three separate flights were necessary to achieve sufficient image overlap. The central part of the Firschnitzbach consists of a steep and partly densely forested canyon, so no flight has been possible for this section so far. The evaluation of the surface model derived from the images showed that only half of the estimated debris mass came from the upper part of the catchment.
Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane
2016-03-01
Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). 
The results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response.
A low-cost dual-camera imaging system for aerial applicators
USDA-ARS's Scientific Manuscript database
Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...
USDA-ARS's Scientific Manuscript database
As remote sensing and variable rate technology are becoming more available for aerial applicators, practical methodologies on effective integration of these technologies are needed for site-specific aerial applications of crop production and protection materials. The objectives of this study were to...
Looking for an old aerial photograph
1997-01-01
Attempts to photograph the surface of the Earth date from the 1800s, when photographers attached cameras to balloons, kites, and even pigeons. Today, aerial photographs and satellite images are commonplace, and the rate of acquiring them has increased rapidly in recent years. Views of the Earth obtained from aircraft or satellites have become valuable tools for Government resource planners and managers, land-use experts, environmentalists, engineers, scientists, and a wide variety of other users. Many people want historical aerial photographs for business or personal reasons. They may want to locate the boundaries of an old farm or a piece of family property, or they may want a photograph as a record of changes in their neighborhood, or as a gift. The U.S. Geological Survey (USGS) maintains the Earth Science Information Centers (ESICs) to sell aerial photographs, remotely sensed images from satellites, a wide array of digital geographic and cartographic data, as well as the Bureau's well-known maps. Declassified photographs from early spy satellites were recently added to the ESIC offerings of historical images. Using the Aerial Photography Summary Record System database, ESIC researchers can help customers find imagery in the collections of other Federal agencies and, in some cases, those of private companies that specialize in esoteric products.
Web-based data delivery services in support of disaster-relief applications
Jones, Brenda K.; Risty, Ron R.; Buswell, M.
2003-01-01
The U.S. Geological Survey Earth Resources Observation Systems Data Center responds to emergencies in support of various government agencies for human-induced and natural disasters. This response consists of satellite tasking and acquisitions, satellite image registrations, disaster-extent maps analysis and creation, base image provision and support, Web-based mapping services for product delivery, and predisaster and postdisaster data archiving. The emergency response staff are on call 24 hours a day, 7 days a week, and have access to many commercial and government satellite and aerial photography tasking authorities. They have access to value-added data processing and photographic laboratory services for off-hour emergency requests. They work with various Federal agencies for preparedness planning, which includes providing base imagery. These data may include digital elevation models, hydrographic models, base satellite images, vector data layers such as roads, aerial photographs, and other predisaster data. These layers are incorporated into a Web-based browser and data delivery service that is accessible either to the general public or to select customers. As usage declines, the data are moved to a postdisaster nearline archive that is still accessible, but not in real time.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow for estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO.
The single most important camera capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on remote sensing image (RSI) pairs and their influence on image-based change detection.
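The apparent image motion (AIM) blur mentioned above follows directly from the platform ground speed, the shutter time, and the ground sample distance. A minimal sketch, with parameter names of our own choosing rather than the study's:

```python
def motion_blur_pixels(ground_speed_m_s, shutter_s, gsd_m):
    """Apparent image motion (AIM) blur in pixels: the ground distance
    travelled during the exposure, divided by the ground sample
    distance (metres of ground per pixel)."""
    return ground_speed_m_s * shutter_s / gsd_m

# e.g. a 30 m/s platform, 1/1000 s shutter and 5 cm GSD
# smears each point across 0.6 px during the exposure
blur = motion_blur_pixels(30.0, 1 / 1000, 0.05)  # -> 0.6
```

Keeping this value well below one pixel is the usual rule of thumb when trading off shutter speed against ISO and aperture.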
Aerial photography flight quality assessment with GPS/INS and DEM data
NASA Astrophysics Data System (ADS)
Zhao, Haitao; Zhang, Bing; Shang, Jiali; Liu, Jiangui; Li, Dong; Chen, Yanyan; Zuo, Zhengli; Chen, Zhengchao
2018-01-01
The flight altitude, ground coverage, photo overlap, and other acquisition specifications of an aerial photography flight mission directly affect the quality and accuracy of the subsequent mapping tasks. To ensure smooth post-flight data processing and fulfill the pre-defined mapping accuracy, flight quality assessments should be carried out in time. This paper presents a novel and rigorous approach for flight quality evaluation of frame cameras with GPS/INS data and DEM, using geometric calculation rather than image analysis as in the conventional methods. This new approach is based mainly on the collinearity equations, in which the accuracy of a set of flight quality indicators is derived through a rigorous error propagation model and validated with scenario data. Theoretical analysis and practical flight test of an aerial photography mission using an UltraCamXp camera showed that the calculated photo overlap is accurate enough for flight quality assessment of 5 cm ground sample distance image, using the SRTMGL3 DEM and the POSAV510 GPS/INS data. An even better overlap accuracy could be achieved for coarser-resolution aerial photography. With this new approach, the flight quality evaluation can be conducted on site right after landing, providing accurate and timely information for decision making.
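The geometric overlap check described here can be sketched from the pinhole relation between flight height, focal length, and sensor size. The following is a simplified along-track illustration assuming flat terrain and a nadir-pointing camera; it is not the paper's full collinearity and error-propagation model, and all numeric values are hypothetical.

```python
def ground_footprint_m(flight_height_m, focal_length_mm, sensor_size_mm):
    """Along-track ground coverage of one frame (pinhole model,
    nadir view over flat terrain)."""
    return flight_height_m * sensor_size_mm / focal_length_mm

def forward_overlap(base_m, flight_height_m, focal_length_mm, sensor_size_mm):
    """Overlap ratio between consecutive exposures separated by base_m."""
    footprint = ground_footprint_m(flight_height_m, focal_length_mm, sensor_size_mm)
    return max(0.0, 1.0 - base_m / footprint)

# e.g. a 100 mm lens and 68 mm sensor at 1000 m AGL give a 680 m
# footprint; a 272 m exposure base then yields 60 % forward overlap
ov = forward_overlap(272.0, 1000.0, 100.0, 68.0)  # -> 0.6
```

The advantage of the GPS/INS-and-DEM approach in the paper is that such overlaps can be evaluated from geometry alone, immediately after landing, without matching the images first.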
Tang, Tianyu; Zhou, Shilin; Deng, Zhipeng; Zou, Huanxin; Lei, Lin
2017-02-10
Detecting vehicles in aerial imagery plays an important role in a wide range of applications. Current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, with limited description capability and heavy computational costs. Recently, owing to their powerful feature representations, region-based convolutional neural network (CNN) detection methods, especially Faster R-CNN, have achieved state-of-the-art performance in computer vision. However, directly using Faster R-CNN for vehicle detection in aerial images has limitations: (1) the region proposal network (RPN) in Faster R-CNN performs poorly at accurately locating small vehicles, due to its relatively coarse feature maps; and (2) the classifier after the RPN cannot distinguish vehicles from complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed to address these two challenges. First, to improve recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets using a combination of hierarchical feature maps. Then, we replace the classifier after the RPN with a cascade of boosted classifiers to verify the candidate regions, aiming at reducing false detections by negative example mining. We evaluate our method on the Munich vehicle dataset and the collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.
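The negative example mining step can be sketched as a generic hard-negative-mining loop: feed the background candidates the current classifier scores highest back into the next training round. This is our own illustration, not the paper's implementation; candidate regions are represented as opaque tokens.

```python
def mine_hard_negatives(candidates, labels, scores, top_k):
    """Pick the background candidates (label 0) that the current
    classifier is most confident are vehicles (highest score):
    the 'hard' negatives to add to the next round's training set."""
    negatives = [(s, c) for c, l, s in zip(candidates, labels, scores) if l == 0]
    negatives.sort(key=lambda t: t[0], reverse=True)
    return [c for _, c in negatives[:top_k]]

# toy candidate regions with ground-truth labels and classifier scores
cands  = ["r1", "r2", "r3", "r4", "r5"]
labels = [1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.2, 0.7, 0.6]
hard = mine_hard_negatives(cands, labels, scores, top_k=2)  # -> ["r2", "r5"]
```

Each boosting stage of the cascade would then be retrained with these hard negatives included, which is what drives the false-detection rate down.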
USDA-ARS's Scientific Manuscript database
With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...
NASA Astrophysics Data System (ADS)
Akinin, M. V.; Akinina, N. V.; Klochkov, A. Y.; Nikiforov, M. B.; Sokolova, A. V.
2015-05-01
The report reviews the fuzzy c-means algorithm, which performs image segmentation; estimates the quality of its results using the Xie-Beni criterion; and presents experimental studies of the algorithm in the context of producing detailed two-dimensional maps with unmanned aerial vehicles. Based on the experimental results, it is concluded that the algorithm can be applied to the interpretation of images obtained by aerial photography. The algorithm can partition the original image into a set of segments (clusters) in a relatively short time, which is achieved by modifying the original k-means algorithm to work with fuzzy memberships.
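A minimal fuzzy c-means iteration on 1-D data may help illustrate the algorithm reviewed here. This is a plain-Python sketch with a naive initialisation; real implementations cluster multi-band pixel vectors, not scalars.

```python
def fcm_1d(xs, c=2, m=2.0, iters=25):
    """Minimal fuzzy c-means on 1-D data: alternately update soft
    memberships and cluster centres.  Unlike hard k-means, every
    point belongs to every cluster with some degree u in [0, 1].
    Assumes c >= 2 and len(xs) >= c."""
    # spread the initial centres across the data range
    centers = [xs[i * (len(xs) - 1) // (c - 1)] for i in range(c)]
    u = [[0.0] * len(xs) for _ in range(c)]
    for _ in range(iters):
        for k, x in enumerate(xs):
            d = [abs(x - v) or 1e-12 for v in centers]  # avoid div by zero
            for i in range(c):
                u[i][k] = 1.0 / sum((d[i] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        for i in range(c):
            w = [uik ** m for uik in u[i]]
            centers[i] = sum(wk * x for wk, x in zip(w, xs)) / sum(w)
    return centers, u

centers, _ = fcm_1d([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # centres near 0.1 and 5.1
```

The fuzziness exponent m controls how soft the partition is; m close to 1 recovers hard k-means behaviour.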
NASA Astrophysics Data System (ADS)
Jiao, Q. S.; Luo, Y.; Shen, W. H.; Li, Q.; Wang, X.
2018-04-01
The Jiuzhaigou earthquake led to the collapse of mountainsides and formed many landslides in the Jiuzhaigou scenic area and along surrounding roads, causing road blockage and serious ecological damage. Due to the urgency of the rescue, the authors deployed an unmanned aerial vehicle (UAV) and entered the disaster area as early as August 9 to obtain aerial images near the epicenter. On the basis of summarizing the characteristics of earthquake landslides in aerial images, and using the object-oriented analysis method, landslide image objects were obtained by multi-scale segmentation, and the feature rule set of each level was built automatically by the SEaTH (Separability and Thresholds) algorithm to realize rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic landslide extraction method achieved an accuracy of 94.3 %. The spatial distribution of the earthquake landslides had a significant positive correlation with slope and relief, a negative correlation with roughness, and no obvious correlation with aspect; a probable reason for the latter is that the study area was too far from the seismogenic fault. This work provides technical support for earthquake field emergency response, earthquake landslide prediction, and disaster loss assessment.
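SEaTH ranks features by their Jeffries-Matusita separability between two classes under a Gaussian assumption, then derives a decision threshold per feature. A hedged sketch: the scan-based threshold search below is our simplification of SEaTH's analytic solution, and the function names are our own.

```python
import math

def jeffries_matusita(m1, s1, m2, s2):
    """SEaTH-style separability of one feature between two classes,
    assuming Gaussian class distributions (means m, std devs s > 0).
    J ranges from 0 (inseparable) to 2 (fully separable)."""
    b = ((m1 - m2) ** 2) / (4 * (s1 ** 2 + s2 ** 2)) \
        + 0.5 * math.log((s1 ** 2 + s2 ** 2) / (2 * s1 * s2))
    return 2 * (1 - math.exp(-b))

def threshold_between(m1, s1, m2, s2, steps=1000):
    """Feature threshold separating the two classes: the point of
    (approximately) equal class likelihood, found here by a scan
    between the two means."""
    lo, hi = min(m1, m2), max(m1, m2)
    def pdf(x, m, s):  # normal density up to a shared constant
        return math.exp(-((x - m) ** 2) / (2 * s * s)) / s
    best = min((abs(pdf(lo + (hi - lo) * i / steps, m1, s1)
                    - pdf(lo + (hi - lo) * i / steps, m2, s2)),
                lo + (hi - lo) * i / steps) for i in range(steps + 1))
    return best[1]
```

Features with J close to 2 are kept in the rule set; the associated thresholds become the per-level classification rules.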
SEM AutoAnalysis: enhancing photomask and NIL defect disposition and review
NASA Astrophysics Data System (ADS)
Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Ehrlich, Christian; Garetto, Anthony
2017-06-01
For defect disposition and repair verification regarding printability, AIMS™ is the state-of-the-art measurement tool in the industry. With its unique capability of capturing aerial images of photomasks, it is the method that comes closest to emulating the printing behaviour of a scanner. However, for nanoimprint lithography (NIL) templates, aerial images cannot be used to evaluate the success of a repair process. Hence, for NIL defect dispositioning, scanning electron microscopy (SEM) imaging is the method of choice. In addition, it has been a standard imaging method for further root-cause analysis of defects and for defect review on optical photomasks, enabling 2D or even 3D mask profiling at high resolution. In recent years, a trend observed in mask shops has been the automation of processes that were traditionally driven by operators. This has brought many advantages, one of which is freeing cost-intensive labour from repetitive and tedious work. Furthermore, it reduces process variability due to different operator skill and experience levels, which ultimately helps eliminate the human factor. Taking these factors into consideration, one of the software-based solutions available under the FAVOR® brand to support customer needs is the aerial image evaluation software AIMS™ AutoAnalysis (AAA). It provides fully automated analysis of AIMS™ images and runs in parallel to measurements, enabled by its direct connection and communication with the AIMS™ tools. As one of many positive outcomes, generating automated result reports is facilitated, standardizing the mask manufacturing workflow. Today, AAA has been successfully introduced into production at multiple customers and is supporting the workflow described above. These trends have triggered demand for similar automation of SEM measurements, leading to the development of SEM AutoAnalysis (SAA).
It aims towards a fully automated SEM image evaluation process utilizing a completely different algorithm due to the different nature of SEM images and aerial images. Both AAA and SAA are the building blocks towards an image evaluation suite in the mask shop industry.
Intergraph video and images exploitation capabilities
NASA Astrophysics Data System (ADS)
Colla, Simone; Manesis, Charalampos
2013-08-01
The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field to be processed by Intergraph software, which generates and disseminates georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.
Lingua, Andrea; Marenchino, Davide; Nex, Francesco
2009-01-01
In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
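A core ingredient of SIFT-based tie-point matching evaluated in such studies is Lowe's nearest/second-nearest distance ratio test. A self-contained toy sketch, in which 2-D "descriptors" stand in for the real 128-D SIFT vectors:

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """SIFT-style matching with Lowe's ratio test: accept a match only
    when the nearest descriptor in the other image is clearly closer
    than the second nearest (distance ratio below the threshold).
    Assumes desc_b holds at least two descriptors."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        d1, d2 = dist(da, desc_b[ranked[0]]), dist(da, desc_b[ranked[1]])
        if d1 < ratio * d2:
            matches.append((i, ranked[0]))
    return matches

# toy 2-D "descriptors": a0 matches b0 unambiguously, while a1 sits
# midway between b1 and b2 and is rejected as ambiguous
a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
m = ratio_test_matches(a, b)  # -> [(0, 0)]
```

Rejecting ambiguous matches this way is what keeps SIFT usable on the repetitive, badly textured scenes discussed in the abstract.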
Hybrid Automatic Building Interpretation System
NASA Astrophysics Data System (ADS)
Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.
2011-09-01
HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most commercially available systems, HABIS is able to work largely automatically. The hybrid method uses different sources, intending to exploit the advantages of each: 3D point clouds usually provide good height and surface data, whereas high-spatial-resolution aerial images provide important edge information and details for roof objects like dormers or chimneys, and the cadastral data provide essential information about the building ground plans. The approach used in HABIS is a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. It continues with an image-based verification of these predicted roofs, and in a further step a final classification and adjustment of the roofs is performed. In addition, some roof objects like dormers and chimneys are extracted from the aerial images and added to the models. In this paper the methods used are described and some results are presented.
Multi-Dimensional Signal Processing Research Program
1981-09-30
applications to real-time image processing and analysis. A specific long-range application is the automated processing of aerial reconnaissance imagery...Non-supervised image segmentation is a potentially important operation in the automated processing of aerial reconnaissance photographs since it
Mushinzimana, Emmanuel; Munga, Stephen; Minakawa, Noboru; Li, Li; Feng, Chen-Chieng; Bian, Ling; Kitron, Uriel; Schmidt, Cindy; Beck, Louisa; Zhou, Guofa; Githeko, Andrew K; Yan, Guiyun
2006-02-16
In the past two decades the east African highlands have experienced several major malaria epidemics. Currently there is renewed interest in exploring the possibility of anopheline larval control through environmental management or larvicide as an additional means of reducing malaria transmission in Africa. This study examined the landscape determinants of anopheline mosquito larval habitats and the usefulness of remote sensing in identifying these habitats in the western Kenya highlands. Panchromatic aerial photos, Ikonos and Landsat Thematic Mapper 7 satellite images were acquired for a study area in Kakamega, western Kenya. Supervised classification of land use and land cover and visual identification of aquatic habitats were conducted. A ground survey of all aquatic habitats was conducted in the dry and rainy seasons in 2003, and all habitats positive for anopheline larvae were identified. The data retrieved from the remote sensors were compared to the ground results on aquatic habitats and land use. The probability of finding aquatic habitats and habitats with Anopheles larvae was modelled based on the digital elevation model and land-use types. The misclassification rate of land-cover types was 10.8% based on Ikonos imagery, 22.6% for panchromatic aerial photos and 39.2% for Landsat TM 7 imagery. The Ikonos image identified 40.6% of aquatic habitats, the aerial photos identified 10.6%, and the Landsat TM 7 image identified 0%. Computer models based on topographic features and land-cover information obtained from the Ikonos image yielded a misclassification rate of 20.3-22.7% for aquatic habitats, and 18.1-25.1% for anopheline-positive larval habitats. One-metre spatial resolution Ikonos images combined with computer modelling based on topographic and land-cover features are useful tools for the identification of anopheline larval habitats, and they can be used to assist malaria vector control in the western Kenya highlands.
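The misclassification rates quoted above come from comparing a confusion matrix of reference classes against mapped classes. A minimal sketch with invented numbers:

```python
def misclassification_rate(confusion):
    """Overall error rate from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return 1.0 - correct / total

# toy 3-class matrix for a land-cover map: 129 of 150 samples
# fall on the diagonal, so 21 are misclassified
cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 39]]
rate = misclassification_rate(cm)  # -> 0.14
```

The per-sensor rates in the study (10.8% for Ikonos, 22.6% for aerial photos, 39.2% for Landsat TM 7) would each come from such a matrix built against the ground survey.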
Construction of an unmanned aerial vehicle remote sensing system for crop monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seungtaek; Ko, Jonghan; Kim, Mijeong; Kim, Jongkwon
2016-04-01
We constructed a lightweight unmanned aerial vehicle (UAV) remote sensing system and determined the ideal method for equipment setup, image acquisition, and image processing. Fields of rice paddy (Oryza sativa cv. Unkwang) grown under three different nitrogen (N) treatments of 0, 50, or 115 kg/ha were monitored at Chonnam National University, Gwangju, Republic of Korea, in 2013. A multispectral camera was used to acquire UAV images from the study site. Atmospheric correction of these images was completed using the empirical line method, and three-point (black, gray, and white) calibration boards were used as pseudo references. Evaluation of our corrected UAV-based remote sensing data revealed that correction efficiency and root mean square errors ranged from 0.77 to 0.95 and 0.01 to 0.05, respectively. The time series maps of simulated normalized difference vegetation index (NDVI) produced using the UAV images reproduced field variations of NDVI reasonably well, both within and between the different N treatments. We concluded that the UAV-based remote sensing technology utilized in this study is potentially an easy and simple way to quantitatively obtain reliable two-dimensional remote sensing information on crop growth.
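The empirical line method used here for atmospheric correction fits a linear DN-to-reflectance relation through the calibration boards. A sketch with hypothetical board values; a real workflow fits each spectral band separately.

```python
def empirical_line(dns, refls):
    """Fit reflectance = gain * DN + offset through the calibration
    boards by ordinary least squares (the empirical line method)."""
    n = len(dns)
    mx, my = sum(dns) / n, sum(refls) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(dns, refls)) \
           / sum((x - mx) ** 2 for x in dns)
    return gain, my - gain * mx

# hypothetical black / gray / white boards: raw DN vs. known reflectance
gain, offset = empirical_line([20.0, 120.0, 220.0], [0.05, 0.45, 0.85])
# apply the line to convert scene pixels from DN to surface reflectance
corrected = [gain * dn + offset for dn in [70.0, 170.0]]  # -> [0.25, 0.65]
```

With the red and NIR bands corrected this way, the NDVI maps in the study follow as (NIR - red) / (NIR + red) per pixel.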
NASA Astrophysics Data System (ADS)
Yahyanejad, Saeed; Rinner, Bernhard
2015-06-01
The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.
Applicability of New Approaches of Sensor Orientation to Micro Aerial Vehicles
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2016-06-01
This study highlights the benefits of precise aerial position and attitude control in the context of mapping with Micro Aerial Vehicles (MAVs). Accurate mapping with MAVs is gaining importance in applications such as corridor mapping, road and pipeline inspections, or mapping of large areas with homogeneous surface structure, e.g. forests or agricultural fields. There, accurate aerial control plays a major role in successful terrain reconstruction and artifact-free orthophoto generation. The presented experiments focus on new approaches to aerial control. We confirm in practice that relative aerial position and attitude control can improve accuracy in difficult mapping scenarios. Indeed, the relative orientation method represents an attractive alternative in the context of MAVs for two reasons. First, the procedure is somewhat simplified, e.g. the angular misalignment, the so-called boresight, between the camera and the inertial measurement unit (IMU) does not have to be determined and, second, the effect of possible systematic errors in satellite positioning (e.g. due to multipath and/or incorrect recovery of differential carrier-phase ambiguities) is mitigated. We first present a typical mapping project over an agricultural field, and then a corridor road mapping project. We evaluate the proposed methods in scenarios with and without automated image observations. We investigate a recently proposed concept in which adjustment is performed using image observations limited to ground control and check points, so-called fast aerial triangulation (Fast AT). In this context we show that accurate aerial control (absolute or relative) together with a few image observations can deliver results comparable to classical aerial triangulation with thousands of image measurements. This procedure in turn reduces the demands on processing time and the requirements on the existence of surface texture.
Finally, we compare the above mentioned procedures with direct sensor orientation (DiSO) to show its potential for rapid mapping.
Semantic labeling of high-resolution aerial images using an ensemble of fully convolutional networks
NASA Astrophysics Data System (ADS)
Sun, Xiaofeng; Shen, Shuhan; Lin, Xiangguo; Hu, Zhanyi
2017-10-01
High-resolution remote sensing data classification has been a challenging and promising research topic in the remote sensing community. In recent years, with the rapid advances of deep learning, remarkable progress has been made in this field, facilitating a transition from hand-crafted feature design to automatic end-to-end learning. A deep fully convolutional network (FCN) based ensemble learning method is proposed to label high-resolution aerial images. To fully tap the potential of FCNs, both the Visual Geometry Group network and a deeper residual network, ResNet, are employed. Furthermore, to enlarge the training samples with diversity and gain better generalization, in addition to the commonly used data augmentation methods (e.g., rotation, multiscale, and aspect ratio) in the literature, aerial images from other datasets are also collected for cross-scene learning. Finally, we combine these learned models to form an effective FCN ensemble and refine the results using a fully connected conditional random field graph model. Experiments on the ISPRS 2-D Semantic Labeling Contest dataset show that our proposed end-to-end classification method achieves an overall accuracy of 90.7%, a state-of-the-art result in the field.
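The ensemble step described above combines the per-pixel class probabilities of several trained FCNs. A minimal sketch of such probability-map averaging follows; the toy 2x2 maps and the simple mean-then-argmax fusion rule are illustrative assumptions (the abstract does not state the exact combination rule, and the CRF refinement is omitted):

```python
import numpy as np

def ensemble_label(prob_maps):
    """Average per-class probability maps from several FCNs and take the
    per-pixel argmax as the fused semantic label map."""
    stacked = np.stack(prob_maps, axis=0)   # (models, H, W, classes)
    mean_prob = stacked.mean(axis=0)        # (H, W, classes)
    return mean_prob.argmax(axis=-1)        # (H, W) integer labels

# Two hypothetical 2x2 probability maps over 3 classes from two models
m1 = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
               [[0.3, 0.3, 0.4], [0.2, 0.2, 0.6]]])
m2 = np.array([[[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],
               [[0.1, 0.6, 0.3], [0.1, 0.1, 0.8]]])
labels = ensemble_label([m1, m2])
```

In practice each map would come from a different network (VGG, ResNet) evaluated on the same tile, and the fused label map would then be passed to the conditional random field.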
Kefauver, Shawn C; Vicente, Rubén; Vergara-Díaz, Omar; Fernandez-Gallego, Jose A; Kerfal, Samir; Lopez, Antonio; Melichar, James P E; Serret Molins, María D; Araus, José L
2017-01-01
With the commercialization and increasing availability of Unmanned Aerial Vehicles (UAVs), multi-rotor copters have expanded rapidly into plant phenotyping studies thanks to their ability to provide clear, high-resolution images. As such, the traditional bottleneck of plant phenotyping has shifted from data collection to data processing. Fortunately, the necessarily controlled and repetitive design of plant phenotyping experiments allows for the development of semi-automatic computer processing tools that may sufficiently reduce the time spent on data extraction. Here we present a comparison of UAV- and field-based high-throughput plant phenotyping (HTPP) using the free, open-source image analysis software FIJI (Fiji Is Just ImageJ), applying RGB (conventional digital cameras), multispectral and thermal aerial imagery in combination with a matching suite of ground sensors in a study of two hybrids and one conventional barley variety with ten different nitrogen treatments, combining different fertilization levels and application schedules. A detailed correlation network for physiological traits and exploration of the data comparing treatments and varieties provided insights into crop performance under different management scenarios. Multivariate regression models explained 77.8%, 71.6%, and 82.7% of the variance in yield from the aerial, ground, and combined data sets, respectively.
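The reported 77.8-82.7% figures are the explained variance of multivariate regression models. A minimal sketch of computing explained variance (R²) from an ordinary least-squares fit is shown below with toy data; the predictor values are hypothetical and the model form is the generic linear one, not the study's exact specification:

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by an ordinary least-squares
    fit on the columns of X (an intercept column is appended)."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

# Toy example: yield depends exactly linearly on one aerial predictor
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
score = r_squared(X, y)   # close to 1.0 for a perfect linear relation
```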
NASA Astrophysics Data System (ADS)
Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma
2018-04-01
Designing powerful, discriminative texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and the analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture consistently improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
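The LBP encoding underlying TEX-Nets thresholds the 8 neighbours of each pixel against its centre value and packs the results into a byte. A minimal sketch for a single 3x3 patch follows; the clockwise neighbour ordering is one common convention and not necessarily the exact mapping used in the paper:

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour Local Binary Pattern code for the centre pixel
    of a 3x3 patch: each neighbour >= centre contributes one set bit."""
    c = patch[1, 1]
    # neighbours taken clockwise starting from the top-left corner
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= c) << i for i, n in enumerate(neigh))

patch = np.array([[5, 9, 1],
                  [3, 4, 6],
                  [7, 2, 8]])
code = lbp_code(patch)   # an integer in [0, 255]
```

Sliding this over an image yields the "mapped coded image" that is fed to the texture stream instead of (or alongside) the RGB patch.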
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-01-01
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use the lens distortion correction method on a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145
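The geodetic coordinates above are obtained by chaining homogeneous coordinate transformations. A minimal illustration of applying a single 4x4 homogeneous transform to a 3D point is given below; the rotation and translation values are hypothetical, not the paper's camera-to-geodetic chain:

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    ph = np.append(p, 1.0)      # lift to homogeneous coordinates
    q = T @ ph
    return q[:3] / q[3]         # back to Cartesian coordinates

# Hypothetical frame change: rotate 90 degrees about z, then translate
T = np.array([[0.0, -1.0, 0.0, 10.0],
              [1.0,  0.0, 0.0,  5.0],
              [0.0,  0.0, 1.0,  0.0],
              [0.0,  0.0, 0.0,  1.0]])
ground = transform_point(T, np.array([1.0, 2.0, 0.0]))
```

In the localization scheme, several such matrices (camera-to-gimbal, gimbal-to-body, body-to-navigation) would be multiplied before the final projection to geodetic coordinates.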
Huang, Rongyong; Zheng, Shunyi; Hu, Kun
2018-06-01
Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the LiDAR data surface. Besides the satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results also show that the unit-weighted root mean square (RMS) of the image points is able to reach a sub-pixel level (0.45 to 0.62 pixel), and the actual horizontal and vertical accuracy can be greatly improved to a high level of 1/4-1/2 (0.17-0.27 m) and 1/8-1/4 (0.10-0.15 m) of the average LiDAR point distance, respectively. Finally, the method is proved to be more accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.
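The registration principle above minimizes distances from photogrammetric matching points to the LiDAR surface. A minimal sketch of the residual being minimized, using a single local plane fitted to the LiDAR points (the plane and points are toy values; the paper's adjustment is iterative and over many local surfaces):

```python
import numpy as np

def point_to_plane_rms(points, normal, d):
    """RMS of signed distances from matched photogrammetric points to a
    local LiDAR surface plane n.x + d = 0 (normal assumed unit length)."""
    dist = points @ normal + d
    return np.sqrt(np.mean(dist ** 2))

# Toy plane z = 0 and three matching points hovering around it
n = np.array([0.0, 0.0, 1.0])
pts = np.array([[0.0, 0.0, 0.3],
                [1.0, 2.0, -0.3],
                [4.0, 1.0, 0.3]])
rms = point_to_plane_rms(pts, n, 0.0)
```

Driving this residual toward zero by adjusting the image orientation parameters is what aligns the photogrammetric block with the LiDAR point cloud.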
Corn and sorghum phenotyping using a fixed-wing UAV-based remote sensing system
NASA Astrophysics Data System (ADS)
Shi, Yeyin; Murray, Seth C.; Rooney, William L.; Valasek, John; Olsenholler, Jeff; Pugh, N. Ace; Henrickson, James; Bowden, Ezekiel; Zhang, Dongyan; Thomasson, J. Alex
2016-05-01
Recent development of unmanned aerial systems has created opportunities in automation of field-based high-throughput phenotyping by lowering flight operational cost and complexity and allowing flexible re-visit times and higher image resolution than satellite or manned airborne remote sensing. In this study, flights were conducted over corn and sorghum breeding trials in College Station, Texas, with a fixed-wing unmanned aerial vehicle (UAV) carrying two multispectral cameras and a high-resolution digital camera. The objectives were to establish the workflow and investigate the ability of UAV-based remote sensing to automate data collection of plant traits to develop genetic and physiological models. Most important among these traits were plant height and number of plants, which are currently collected manually at high labor cost. Vegetation indices were calculated for each breeding cultivar from mosaicked and radiometrically calibrated multi-band imagery in order to be correlated with ground-measured plant heights, populations and yield across high genetic-diversity breeding cultivars. Growth curves were profiled from the aerially measured time-series height and vegetation index data. The next step of this study will be to investigate the correlations between aerial measurements and ground truth measured manually in the field and from lab tests.
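The vegetation indices mentioned above are per-pixel band combinations; NDVI is the most common one for calibrated multispectral imagery. A minimal sketch with toy reflectance values (the study does not state which specific indices were used, so NDVI here is an illustrative assumption):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance
    bands; eps guards against division by zero on dark pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance tiles: vegetation is bright in NIR, dark in red
nir = np.array([[0.8, 0.6],
                [0.2, 0.5]])
red = np.array([[0.2, 0.2],
                [0.2, 0.5]])
index = ndvi(nir, red)
```

Averaging such an index over each breeding plot gives the per-cultivar value that is then correlated with ground-measured height, population and yield.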
A Low-Cost Imaging System for Aerial Applicators
USDA-ARS?s Scientific Manuscript database
Agricultural aircraft provide a readily available and versatile platform for airborne remote sensing. Although various airborne imaging systems are being used for research and commercial applications, most of these systems are either too expensive or too complex to be of practical use for aerial app...
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. ...
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. PHOTOGRAPH TAKEN ON 6 APRIL 1968. NCA HISTORY COLLECTION. - Rock Island National Cemetery, Rock Island Arsenal, 0.25 mile north of southern tip of Rock Island, Rock Island, Rock Island County, IL
Evaluation of remote sensing aerial systems in existing transportation practices, phase II.
DOT National Transportation Integrated Search
2011-06-01
A low-cost aerial platform represents a flexible tool for acquiring high-resolution images for ground areas of interest. The geo-referencing of objects within these images could benefit civil engineers in a variety of research areas including, but no...
Lessons learned in historical mapping of conifer and oak in the North Coast
Melissa V. Eitzel; Maggi Kelly; Lenya N. Quinn-Davidson
2015-01-01
Conifer encroachment into oak woodlands is becoming a pressing concern for oak conservation, particularly in California's north coast. We use Object-Based Image Analysis (OBIA) with historical aerial imagery from 1948 and recent high-spatial-resolution images from 2009 to explore the potential for mapping encroachment using remote sensing. We find that pre-...
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects. Using the 3D models, we can build a database containing training images by rendering the 3D model from viewpoints evenly distributed on a sphere. The sphere point distribution is made by the geosphere principle. The gathered training image set is used for calculating descriptors, which will be used in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object and the reference objects by matching an observed image descriptor and the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
Condenser optics, partial coherence, and imaging for soft-x-ray projection lithography.
Sommargren, G E; Seppala, L G
1993-12-01
A condenser system couples the radiation source to an imaging system, controlling the uniformity and partial coherence at the object, which ultimately affects the characteristics of the aerial image. A soft-x-ray projection lithography system based on a ring-field imaging system and a laser-produced plasma x-ray source places considerable constraints on the design of a condenser system. Two designs are proposed, critical illumination and Köhler illumination, each of which requires three mirrors and scanning for covering the entire ring field with the required uniformity and partial coherence. Images based on Hopkins' formulation of partially coherent imaging are simulated.
NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention
[Image: Aerial view of the National Institutes of Health Clinical Center (Building 10) in Bethesda, Maryland.]
NASA Astrophysics Data System (ADS)
Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.
2015-08-01
The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable to mount on an Unmanned Aerial System (UAS) for acquiring imagery of high spectral, spatial and temporal resolution for various remote sensing applications. However, because its wavelength range is only 10 nm, the resulting image resolution and signal-to-noise ratio are low, which is not suitable for image matching and digital surface model (DSM) generation. In the meantime, since the spectral correlation among all 12 bands of MiniMCA images is low, it is difficult to perform tie-point matching and aerial triangulation at the same time. In this study, we thus propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher spatial resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS used, these two kinds of sensors could be carried at the same time or individually. In this study, we adopt a fixed-wing UAS to carry a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multispectral camera. In order to perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with the DSLR camera. However, since all lenses of the MiniMCA-12 have different perspective centers and viewing angles, the original 12 channels have a significant band misregistration effect. Thus, the first issue encountered is to reduce the band misregistration effect. Since all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and all images overlap by almost 98%; we thus propose a modified projective transformation (MPT) method together with two systematic error correction procedures to register all 12 bands of imagery in the same image space. 
This means that the 12 bands of images acquired at the same exposure time will have the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). Thus, in the aerial triangulation stage, the master band of the MiniMCA-12 was treated as a reference channel to link with the DSLR RGB images; all reference images from the master band of the MiniMCA-12 and all RGB images were triangulated at the same time in the same coordinate system of ground control points (GCPs). Because the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images alone, even when they cannot be recognized on the MiniMCA images. Furthermore, a one-meter gridded digital surface model (DSM) is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme can achieve an average misregistration residual length of 0.33 pixels, and the co-registration errors among the 12 MiniMCA ortho-images and between the MiniMCA and Canon RGB ortho-images are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable and accurate for future remote sensing applications.
Direct Penguin Counting Using Unmanned Aerial Vehicle Image
NASA Astrophysics Data System (ADS)
Hyun, C. U.; Kim, H. C.; Kim, J. H.; Hong, S. G.
2015-12-01
This study presents an application of unmanned aerial vehicle (UAV) images to monitor a penguin colony on Barton Peninsula, King George Island, Antarctica. The area around Narębski Point, located on the southeast coast of Barton Peninsula, was designated as Antarctic Specially Protected Area No. 171 (ASPA 171), and Chinstrap and Gentoo penguins inhabit this area. The UAV images were acquired over a part of ASPA 171 from four flights in a single day, Jan 18, 2014. About 360 images were mosaicked into an image of about 3 cm spatial resolution, and then a subset including representative penguin rookeries was selected. The subset image was segmented based on a gradient map of pixel values, and spectral and spatial attributes were assigned to each segment. Object-based image analysis (OBIA) was conducted with consideration of spectral attributes, including the mean and minimum values of each segment, and various shape attributes such as area, length, compactness and roundness to detect individual penguins. The segments indicating individual penguins were effectively detected on rookeries with high contrast in the spectral and shape attributes. The importance of periodic and precise monitoring of penguins has been recognized because variations in their populations reflect environmental changes and disturbance from human activities. The very high resolution imaging method shown in this study can be applied to other penguin habitats in Antarctica, and the results will be able to support establishing effective environmental management plans.
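Once segments corresponding to individual penguins are detected, counting reduces to labeling connected foreground regions in a binary detection mask. A minimal flood-fill blob counter is sketched below; the toy mask and 4-connectivity are illustrative assumptions standing in for the full OBIA rule set:

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground blobs in a binary mask via flood fill,
    a minimal stand-in for segment-based individual detection."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:          # flood-fill this blob away
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

# Toy detection mask with three separate "penguin" blobs
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [1, 0, 0, 1]])
n = count_blobs(mask)
```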
Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery
NASA Astrophysics Data System (ADS)
Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.
2018-05-01
The developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high resolution (e.g., less than a 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused into an accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross validation, and it obtained an overall accuracy of 0.945 and a Kappa value of 0.912. Finally, the relationship between detected weeds and their ground truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. In addition, the importance of the input features was evaluated, and it was found that the ratio of vegetation length and width was the most significant feature for the classification model. Overall, our approach can yield a satisfactory weed map, and we expect that the obtained accurate and timely weed map from UAV imagery will be applicable to realize site-specific weed management (SSWM) in early season crop fields, reducing the spraying of non-selective herbicides and costs.
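The final step above fuses the Hough-based inter-row mask with the OBIA-based intra-row mask. The abstract does not state the exact fusion rule, so the pixel-wise logical OR below is one simple assumption, with toy masks for illustration:

```python
import numpy as np

def fuse_weed_masks(inter_row, intra_row):
    """Fuse a Hough-based inter-row weed mask and an OBIA-based intra-row
    weed mask into a single binary weed map via pixel-wise logical OR."""
    return np.logical_or(inter_row, intra_row).astype(np.uint8)

# Toy 2x3 binary masks from the two detection branches
hough_mask = np.array([[1, 0, 0],
                       [0, 0, 1]])
obia_mask = np.array([[1, 1, 0],
                      [0, 0, 0]])
weed_map = fuse_weed_masks(hough_mask, obia_mask)
```

An OR keeps every weed detected by either branch; a stricter AND would trade recall for precision, which is the kind of choice the fused-map accuracy figures would inform.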
A low-cost single-camera imaging system for aerial applicators
USDA-ARS?s Scientific Manuscript database
Agricultural aircraft provide a readily available and versatile platform for airborne remote sensing. Although various airborne imaging systems are available, most of these systems are either too expensive or too complex to be of practical use for aerial applicators. The objective of this study was ...
NASA Astrophysics Data System (ADS)
Hussnain, Zille; Oude Elberink, Sander; Vosselman, George
2016-06-01
In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieves pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
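The corner detection step above builds on the classic Harris response. A minimal, unadapted version is sketched below (finite-difference gradients, a plain 3x3 accumulation window, and a toy image with one corner; the paper's adaptive variant is not reproduced):

```python
import numpy as np

def box_sum3(a):
    """Sum over a 3x3 window at each pixel (zero padding at the border)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor accumulated over a 3x3 window from image gradients."""
    gy, gx = np.gradient(img.astype(float))
    Sxx = box_sum3(gx * gx)
    Syy = box_sum3(gy * gy)
    Sxy = box_sum3(gx * gy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Toy image with a single bright square whose corner sits at pixel (2, 2)
img = np.zeros((5, 5))
img[2:, 2:] = 1.0
R = harris_response(img)   # large positive response near the corner
```

Thresholding R and keeping local maxima yields the corner points that are then described with LATCH and matched across the two ortho-image patches.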
USDA-ARS?s Scientific Manuscript database
Ultra high resolution digital aerial photography has great potential to complement or replace ground measurements of vegetation cover for rangeland monitoring and assessment. We investigated object-based image analysis (OBIA) techniques for classifying vegetation in southwestern U.S. arid rangelands...
USDA-ARS?s Scientific Manuscript database
Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...
Textural features for image classification
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Dinstein, I.; Shanmugam, K.
1973-01-01
Description of some easily computable textural features based on gray-tone spatial dependencies, and illustration of their application in category-identification tasks of three different kinds of image data - namely, photomicrographs of five kinds of sandstones, 1:20,000 panchromatic aerial photographs of eight land-use categories, and ERTS multispectral imagery containing several land-use categories. Two kinds of decision rules are used - one for which the decision regions are convex polyhedra (a piecewise-linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographic imagery, and 83% for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
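The gray-tone spatial-dependence features above are computed from a gray-level co-occurrence matrix (GLCM). A minimal sketch of one such feature (contrast) on a toy 3-level image follows; the single horizontal offset shown is one of the several directions the full feature set would use:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for offset (dy, dx), normalized so
    entries are joint probabilities of gray-tone pairs."""
    img = np.asarray(img)
    h, w = img.shape
    P = np.zeros((levels, levels))
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def contrast(P):
    """Haralick contrast feature: sum over P(i, j) * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

# Toy 3x3 image quantized to 3 gray tones
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 1]])
P = glcm(img, levels=3)
c = contrast(P)
```

Energy, entropy and correlation are computed from the same matrix, giving the feature vector fed to the piecewise-linear or min-max decision rule.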
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and from the aerial image processed through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant improvement in classification accuracy over all state-of-the-art references. PMID:29581722
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Hamledari, Hesam
In this research, an envisioned intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAVs), entitled "InPRO", is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected using digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.
Converting aerial imagery to application maps
USDA-ARS?s Scientific Manuscript database
Over the last couple of years in Agricultural Aviation and at the 2014 and 2015 NAAA conventions, we have written about and presented both single-camera and two-camera imaging systems for use on agricultural aircraft. Many aerial applicators have shown a great deal of interest in the imaging systems...
Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing
NASA Astrophysics Data System (ADS)
Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.
2016-06-01
In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). The data were usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the generally used devices), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control points, check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in the case of emergencies or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open-source software packages were tested. The achieved results are analysed and the discrepancies from some reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).
UAV-Based 3D Urban Environment Monitoring
NASA Astrophysics Data System (ADS)
Boonpook, Wuttichai; Tan, Yumin; Liu, Huaqing; Zhao, Binbin; He, Lingfeng
2018-04-01
Unmanned Aerial Vehicle (UAV) based remote sensing can be used to produce three-dimensional (3D) maps with great flexibility, besides its ability to provide high-resolution images. In this paper we propose a quick change-detection method for UAV images that combines altitude from a Digital Surface Model (DSM) with texture analysis of the images. Cases of UAV images with and without georeferencing are both considered. Research results show that the accuracy of change detection can be enhanced by the georeferencing procedure, and that change detection on UAV images collected both vertically and obliquely but without georeferencing also performs well in accuracy and precision.
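The idea of combining DSM altitude differences with image texture can be sketched minimally as follows; the thresholds, window size, and the use of a sliding-window standard deviation as the texture measure are illustrative assumptions, not the paper's actual method:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std(img, win):
    # sliding-window standard deviation as a crude texture measure
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    return sliding_window_view(padded, (win, win)).std(axis=(-2, -1))

def detect_changes(dsm_before, dsm_after, height_thresh=2.0, texture_thresh=2.0, win=3):
    """Flag pixels where both the DSM height difference and the local
    texture difference exceed their thresholds (illustrative values)."""
    dh = np.abs(dsm_after - dsm_before)
    dt = np.abs(local_std(dsm_after, win) - local_std(dsm_before, win))
    return (dh > height_thresh) & (dt > texture_thresh)
```

Requiring both cues suppresses false alarms from DSM noise alone, which is one plausible reading of the combination described in the abstract.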
NASA Astrophysics Data System (ADS)
Chirayath, V.; Instrella, R.
2016-02-01
We present NASA ESTO FluidCam 1 & 2, Visible and NIR Fluid-Lensing-enabled imaging payloads for Unmanned Aerial Vehicles (UAVs). Developed as part of a focused 2014 earth science technology grant, FluidCam 1 & 2 are Fluid-Lensing-based computational optical imagers designed for automated 3D mapping and remote sensing of underwater coastal targets from airborne platforms. Fluid Lensing has been used to map underwater reefs in 3D in American Samoa and Hamelin Pool, Australia from UAV platforms at sub-cm scale, and has proven a valuable tool in modern marine research for marine biosphere assessment and conservation. We share FluidCam 1 & 2 instrument validation and testing results as well as preliminary processed data from field campaigns. Petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk reefs demonstrate broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance for improving bathymetry data for physical oceanographic models and for understanding climate change's impact on coastal zones, global oxygen production, and carbon sequestration.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao
2018-07-01
In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim at accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. An automatic template generation and matching method for PVP extraction from VHR imagery is designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.
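For context, the routine template matching the paper improves upon can be reduced to a brute-force normalized cross-correlation search; this baseline sketch is not the paper's region-line association method:

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over a 2-D image and return the (row, col) of the
    window with the highest normalized cross-correlation score. O(H*W*h*w);
    real implementations use FFTs or integral images for speed."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = (t * wn).mean()          # 1.0 for a perfect match
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```

This baseline needs a user-supplied template, which is exactly the dependency the paper's automatic template generation removes.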
Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial Photos
NASA Astrophysics Data System (ADS)
Miao, X.; Xie, H.
2015-12-01
High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) polygon neighbor analysis separates melt ponds and submerged ice based on spatial relationship; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of aerial photos, and their uncertainties are estimated.
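Step (2), the random forest classification of object primitives, might be sketched as follows; the two features and the synthetic training clusters are invented for illustration (the actual classifier uses richer spectral and textural object features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-object features: [mean brightness, texture (std)]
# Classes: 0 water, 1 general submerged ice (GSI), 2 shadow, 3 ice/snow
rng = np.random.default_rng(0)
X_train = np.vstack([
    rng.normal([0.1, 0.02], 0.02, (50, 2)),   # water: dark, smooth
    rng.normal([0.5, 0.05], 0.02, (50, 2)),   # GSI: mid-bright
    rng.normal([0.2, 0.10], 0.02, (50, 2)),   # shadow: dark, textured
    rng.normal([0.9, 0.03], 0.02, (50, 2)),   # ice/snow: bright
])
y_train = np.repeat([0, 1, 2, 3], 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict([[0.88, 0.03]]))  # a bright, smooth object -> class 3 (ice/snow)
```

Steps (3) and (4) then operate on the predicted labels, using polygon adjacency and shadow geometry respectively.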
True Ortho Generation of Urban Area Using High Resolution Aerial Photos
NASA Astrophysics Data System (ADS)
Hu, Yong; Stanley, David; Xin, Yubin
2016-06-01
The pros and cons of existing methods for true ortho generation are analyzed based on a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. Frame and pushbroom images are processed using different algorithms for visibility analysis due to the need for perspective centers in z-buffer (or similar) techniques. For occlusion compensation, the pixel-based approach tends to produce excessive seamlines in the ortho-rectified images because of its pixel-by-pixel quality rating. In this paper, we propose innovative solutions to the aforementioned problems. For visibility analysis, an elevation buffer technique is introduced that employs plain elevations instead of the distances from perspective centers used by the z-buffer, and has the advantage of sensor independence. A segment-oriented strategy is developed that evaluates a plain cost measure per segment for occlusion compensation instead of the tedious per-pixel quality rating. The cost measure directly evaluates the imaging geometry characteristics in ground space and is also sensor independent. Experimental results are demonstrated using aerial photos acquired by an UltraCam camera.
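The elevation buffer idea, keeping only the highest elevation that maps to each image pixel, can be sketched as below; the `project` callable stands in for the sensor model and is an assumption of this sketch:

```python
import numpy as np

def elevation_buffer(ground_points, project, image_shape):
    """Elevation-buffer visibility sketch: each ground point (x, y, z)
    projects into the image; a pixel keeps only the highest elevation,
    so lower points mapping to the same pixel are flagged as occluded.
    `project` maps (x, y, z) -> (row, col)."""
    buf = np.full(image_shape, -np.inf)
    pixels = [project(x, y, z) for x, y, z in ground_points]
    for (r, c), (_, _, z) in zip(pixels, ground_points):
        if z > buf[r, c]:
            buf[r, c] = z            # first pass: record max elevation per pixel
    # second pass: a point is visible iff it holds the buffered elevation
    return np.array([z == buf[r, c] for (r, c), (_, _, z) in zip(pixels, ground_points)])
```

Because only elevations are compared, the same test works for frame and pushbroom geometries, which is the sensor-independence argument made above.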
Monitoring and Assuring the Quality of Digital Aerial Data
NASA Technical Reports Server (NTRS)
Christopherson, Jon
2007-01-01
This viewgraph presentation explains the USGS plan for monitoring and assuring the quality of digital aerial data. The contents include: 1) History of USGS Aerial Imaging Involvement; 2) USGS Research and Results; 3) Outline of USGS Quality Assurance Plan; 4) Other areas of Interest; and 5) Summary
Shadow detection and removal in RGB VHR images for land use unsupervised classification
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2016-09-01
Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows still limit the benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information and avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB-based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so-called "anisotropic Procrustes" and "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classification, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps have been compared with each other and with a supervised clustering procedure.
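A hedged sketch of the comparison baseline, the linear correlation correction based on the Cholesky decomposition, which re-maps shadow pixels so that their mean and covariance match a lit reference region (details may differ from the paper's exact variant):

```python
import numpy as np

def linear_correlation_correction(shadow_px, lit_px):
    """Re-map (N, 3) RGB shadow pixels so that their mean and covariance
    match those of a lit reference region, via the Cholesky factors of
    the two sample covariance matrices."""
    mu_s, mu_l = shadow_px.mean(0), lit_px.mean(0)
    L_s = np.linalg.cholesky(np.cov(shadow_px.T))
    L_l = np.linalg.cholesky(np.cov(lit_px.T))
    T = L_l @ np.linalg.inv(L_s)          # whiten shadow stats, re-color as lit
    return (shadow_px - mu_s) @ T.T + mu_l
```

By construction the corrected pixels have exactly the lit region's sample mean and covariance, which is what makes this a natural baseline for the Procrustes-based alternatives.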
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with these problems for medium-altitude UAV imagery. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue to the model, on the basis of UAV metadata information. An accurate depth map reduces color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by a global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
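The core inversion with a metadata-derived depth map can be illustrated as follows; this sketch uses the standard uniform scattering model I = J·t + A·(1 − t) rather than the paper's nonuniform extension, and the airlight and scattering coefficient are illustrative values:

```python
import numpy as np

def dehaze_with_depth(image, depth, airlight=0.9, beta=0.01, t_min=0.1):
    """Invert the standard atmospheric scattering model using a known
    per-pixel depth map: transmission t = exp(-beta * depth), then
    J = (I - A) / t + A. `image` is (H, W, 3) in [0, 1]; `depth` is (H, W)."""
    t = np.exp(-beta * depth)
    t = np.clip(t, t_min, 1.0)[..., None]      # avoid division blow-up far away
    J = (image - airlight) / t + airlight
    return np.clip(J, 0.0, 1.0)
```

The paper's point is precisely that when depth comes from flight metadata rather than priors, the transmission map, and hence the recovered colors, distort less.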
Aircraft and satellite monitoring of water quality in Lake Superior near Duluth
NASA Technical Reports Server (NTRS)
Scherz, J. P.; Sydor, M.; Vandomelen, J. F.
1974-01-01
Satellite images and low altitude aerial photographs often show vivid discolorations in water bodies. Extensive laboratory analysis shows that water reflectance, which causes brightness on aerial images, correlates positively with the water quality parameter of turbidity, which on a particular day correlates with suspended solids. Work with low altitude photography on three overcast days and with ERTS images on five clear days provides positive correlation of image brightness with the high turbidity and solids which are present in Lake Superior near Duluth over 50% of the time. Proper use of aerial images would have shown that an $8,000,000 drinking water intake constructed in the midst of this non-potable, turbid water should have been located 6 miles north in clear, usable water. Noise effects such as skylight reflection, atmospheric effects, and depth penetration must also be understood for operational use of remote sensing for water quality monitoring and are considered in the paper.
Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja
2015-01-01
In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if this first segmentation provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, together with a 2D roof-ridge modeling technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs of varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques from the literature demonstrates the effectiveness and reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method, compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with the efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706
Application of high resolution images from unmanned aerial vehicles for hydrology and range science
USDA-ARS?s Scientific Manuscript database
A common problem in many natural resource disciplines is the lack of high-enough spatial resolution images that can be used for monitoring and modeling purposes. Advances have been made in the utilization of Unmanned Aerial Vehicles (UAVs) in hydrology and rangeland science. By utilizing low fligh...
Towards collaboration between unmanned aerial and ground vehicles for precision agriculture
NASA Astrophysics Data System (ADS)
Bhandari, Subodh; Raheja, Amar; Green, Robert L.; Do, Dat
2017-05-01
This paper presents the work being conducted at Cal Poly Pomona on collaboration between unmanned aerial and ground vehicles for precision agriculture. The unmanned aerial vehicles (UAVs), equipped with multispectral/hyperspectral cameras and RGB cameras, take images of the crops while flying autonomously. The images are post-processed or can be processed onboard. The processed images are used in the detection of unhealthy plants. Aerial data can be used by the UAVs and unmanned ground vehicles (UGVs) for various purposes including care of crops, harvest estimation, etc. The images can also be useful for optimized harvesting by isolating low-yielding plants. These vehicles can be operated autonomously with limited or no human intervention, thereby reducing cost and limiting human exposure to agricultural chemicals. The paper discusses the autonomous UAV and UGV platforms used for the research, sensor integration, and experimental testing. Methods for ground-truthing the results obtained from the UAVs are also presented. The paper also discusses equipping the UGV with a robotic arm for removing unhealthy plants and/or weeds.
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - two single aerial images do not always meet the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM, however, directly depends on the UAV flight altitude.
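The stated dependence of DTM accuracy on flight altitude for a fixed stereo base follows from the textbook normal-case stereo error propagation, sketched here with illustrative numbers (not the rig's actual parameters):

```python
def height_accuracy(flight_altitude, baseline, focal_length, parallax_precision):
    """Normal-case stereo error propagation: sigma_Z = Z**2 / (B * f) * sigma_p.
    All quantities in metres. Error grows with the square of the altitude Z
    and inversely with the stereo base B, so a small fixed photobase makes
    DTM accuracy degrade quickly as the UAV climbs."""
    return flight_altitude ** 2 / (baseline * focal_length) * parallax_precision

# e.g. 50 m altitude, 0.2 m base, 5 mm focal length, 2 micron parallax precision
print(height_accuracy(50.0, 0.2, 0.005, 2e-6))  # about 5 m of height error
```

Doubling the altitude in this model quadruples the height error, which is why the abstract ties DTM accuracy directly to flight altitude.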
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.
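Block averaging is one simple way to simulate a coarser ground sample distance from a finer flight, in the spirit of the RS-images above; the actual resampling kernel used in the study may differ:

```python
import numpy as np

def resample(image, factor):
    """Downsample a 2-D band by an integer factor using block averaging,
    simulating a coarser ground sample distance (e.g. a 30 m flight
    resampled to mimic 60 m). Real kernels (nearest, bilinear, cubic)
    behave differently at edges and for fractional factors."""
    h, w = image.shape
    h2, w2 = h // factor * factor, w // factor * factor   # crop to a multiple
    img = image[:h2, :w2]
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
```

Each output pixel averages a factor × factor block, which roughly mimics the spatial integration of a sensor flown at a higher altitude.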
UAV Photogrammetric Workflows: A Best Practice Guideline
NASA Astrophysics Data System (ADS)
Federman, A.; Santana Quintero, M.; Kretz, S.; Gregg, J.; Lengies, M.; Ouimet, C.; Laliberte, J.
2017-08-01
The increasing commercialization of unmanned aerial vehicles (UAVs) has opened the possibility of performing low-cost aerial image acquisition for the documentation of cultural heritage sites through UAV photogrammetry. The flying of UAVs in Canada is regulated by Transport Canada and requires a Special Flight Operations Certificate (SFOC). Various image acquisition techniques are explored in this review, as well as the software used to register the data. A general workflow procedure has been formulated based on the literature reviewed. A case study of using UAV photogrammetry at the Prince of Wales Fort is discussed, specifically in relation to data acquisition and processing. Some gaps in the literature reviewed highlight the need to streamline the SFOC application process and to incorporate UAVs into cultural heritage documentation courses.
Inventory of forest and rangeland and detection of forest stress. [Colorado and California
NASA Technical Reports Server (NTRS)
Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Disturbances in a forest environment that cause reductions in forest area, timber volume, and timber growth can be detected on ERTS-1 combined color composites. However, detection depends on comparing a conventional aerial photograph taken at some base year with an ERTS-1 image taken in some subsequent year. In a test made on the Atlanta site, 1:63,360 scale aerial photo index sheets made in 1966 were compared with ERTS-1 image 1264-15445 (April 1973). Five factors were found important to detection reliability: (1) the quality of the imagery; (2) the season of the imagery; (3) the size of the disturbed area; (4) the number of years since the disturbances; and (5) the type of cutting treatment. Of 209 disturbances verified on aerial photography, 165 (or approximately 80%) were detected on the ERTS-1 image by one independent interpreter. Improved training and additional experience in using this low resolution imagery should improve detection. Of the two seasons of data studied (fall and early spring), early spring is the best for detecting land use changes. Generally speaking, winter, early spring, and early summer are the best times of year for detecting forest disturbances.
A Method for Simultaneous Aerial and Terrestrial Geodata Acquisition for Corridor Mapping
NASA Astrophysics Data System (ADS)
Molina, P.; Blázquez, M.; Sastre, J.; Colomina, I.
2015-08-01
In this paper, we present mapKITE, a new mobile, simultaneous terrestrial and aerial geodata collection and post-processing method. On the one hand, the method combines a terrestrial mobile mapping system (TMMS) with an unmanned aerial mapping one, both equipped with remote sensing payloads (at least, a nadir-looking visible-band camera in the UA), by means of which aerial and terrestrial geodata are acquired simultaneously. This tandem geodata acquisition system is based on a terrestrial vehicle (TV) and an unmanned aircraft (UA) linked by a 'virtual tether', that is, a mechanism based on the real-time supply of UA waypoints by the TV. By means of the TV-to-UA tether, the UA follows the TV, keeping a specific relative TV-to-UA spatial configuration that enables the simultaneous operation of both systems to obtain highly redundant and complementary geodata. On the other hand, mapKITE presents a novel concept for geodata post-processing favoured by the rich geometry derived from the mapKITE tandem's simultaneous operation. The approach followed for sensor orientation and calibration of the aerial images captured by the UA inherits the principles of Integrated Sensor Orientation (ISO) and adds the pointing-and-scaling photogrammetric measurement of a distinctive element observed in every UA image: a coded target mounted on the roof of the TV. By means of the TV navigation system, the orientation of the TV coded target is performed and used in the post-processing UA image orientation approach as a Kinematic Ground Control Point (KGCP). The geometric strength of a mapKITE ISO network is therefore high, as it includes traditional tie-point image measurements, static ground control points, kinematic aerial control, and the new point-and-scale measurements of the KGCPs. With such a geometry, reliable system and sensor orientation and calibration, and an eventual further reduction of the number of traditional ground control points, are feasible.
The different technical concepts, challenges and breakthroughs behind mapKITE are presented in this paper, such as the TV-to-UA virtual tether and the use of KGCP measurements for UA sensor orientation. In addition, the use in mapKITE of new European GNSS signals such as the Galileo E5 AltBOC is discussed. Because of the critical role of GNSS technologies and the potential impact on the corridor mapping market, the European Commission and the European GNSS Agency, in the frame of the European Union Framework Programme for Research and Innovation "Horizon 2020," have recently awarded the "mapKITE" project to an international consortium of organizations coordinated by GeoNumerics S.L.
UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought
Ludovisi, Riccardo; Tauro, Flavia; Salvati, Riccardo; Khoury, Sacha; Mugnozza Scarascia, Giuseppe; Harfouche, Antoine
2017-01-01
Poplars are fast-growing, high-yielding forest tree species, whose cultivation as second-generation biofuel crops is of increasing interest and can efficiently meet emission reduction goals. Yet, breeding elite poplar trees for drought resistance remains a major challenge. Worldwide breeding programs are largely focused on intra/interspecific hybridization, whereby Populus nigra L. is a fundamental parental pool. While high-throughput genotyping has resulted in unprecedented capabilities to rapidly decode complex genetic architecture of plant stress resistance, linking genomics to phenomics is hindered by technically challenging phenotyping. Relying on unmanned aerial vehicle (UAV)-based remote sensing and imaging techniques, high-throughput field phenotyping (HTFP) aims at enabling highly precise and efficient, non-destructive screening of genotype performance in large populations. To efficiently support forest-tree breeding programs, ground-truthing observations should be complemented with standardized HTFP. In this study, we develop a high-resolution (leaf level) HTFP approach to investigate the response to drought of a full-sib F2 partially inbred population (termed here ‘POP6’), whose F1 was obtained from an intraspecific P. nigra controlled cross between genotypes with highly divergent phenotypes. We assessed the effects of two water treatments (well-watered and moderate drought) on a population of 4603 trees (503 genotypes) hosted in two adjacent experimental plots (1.67 ha) by conducting low-elevation (25 m) flights with an aerial drone and capturing 7836 thermal infrared (TIR) images. TIR images were undistorted, georeferenced, and orthorectified to obtain radiometric mosaics. Canopy temperature (Tc) was extracted using two independent semi-automated segmentation techniques, eCognition- and Matlab-based, to avoid the mixed-pixel problem. 
Overall, results showed that UAV-based thermal imaging enables effective assessment of genotype variability under drought stress conditions. Tc derived from aerial thermal imagery correlated well with ground-truth stomatal conductance (gs) for both segmentation techniques. Interestingly, the HTFP approach was instrumental in detecting a drought-tolerant response in 25% of the population. This study shows the potential of UAV-based thermal imaging for field phenomics of poplar and other tree species. This is anticipated to have significant implications for accelerating forest tree genetic improvement against abiotic stress. PMID:29021803
Volumetric calculation using low cost unmanned aerial vehicle (UAV) approach
NASA Astrophysics Data System (ADS)
Rahman, A. A. Ab; Maulud, K. N. Abdul; Mohd, F. A.; Jaafar, O.; Tahar, K. N.
2017-12-01
Unmanned Aerial Vehicle (UAV) technology has evolved dramatically in the 21st century. It is used by both the military and the general public for recreational purposes and mapping work. The operating cost of a UAV is much lower than that of a conventional aircraft, and it does not require a large workspace. UAV systems offer functions similar to LiDAR and satellite imaging, technologies that demand considerable cost, labour, and time to produce elevation and dimension data. Difficult objects, such as a water tank, can also be measured using a UAV. The purpose of this paper is to show the capability of UAVs to compute the volume of a water tank based on different numbers of images and control points. The results were compared with the actual volume of the tank to validate the measurements. In this study, image acquisition was done with a Phantom 3 Professional, a low-cost UAV. The analysis is based on volume computations using two and four control points with varying sets of UAV images. The results show that more images provide a better quality measurement. With 95 images and four GCPs, the error relative to the actual volume is about 5%. Four control points are enough to obtain good results, but more images are needed, estimated at about 115 to 220. All in all, it can be concluded that a low-cost UAV has the potential to be used for water volume and dimension measurement.
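The volume computation described above (UAV images, dense surface, volume against ground truth) can be illustrated with a minimal sketch: once photogrammetry has produced a gridded surface model, the volume above a base plane is a sum over grid cells. The grid values and cell size below are hypothetical, not taken from the paper.

```python
def volume_from_dem(heights, base_level, cell_area):
    """Estimate volume above a base plane from a gridded surface model.

    heights: 2-D list of surface elevations (m), e.g. from UAV photogrammetry
    base_level: elevation of the reference base plane (m)
    cell_area: ground area covered by one grid cell (m^2)
    """
    total = 0.0
    for row in heights:
        for z in row:
            dz = z - base_level
            if dz > 0:          # ignore cells below the base plane
                total += dz * cell_area
    return total

# A flat 2 m slab over a 2x2 grid of 1 m^2 cells -> 8 m^3
dem = [[2.0, 2.0], [2.0, 2.0]]
print(volume_from_dem(dem, 0.0, 1.0))  # 8.0
```

In practice the surface grid would come from dense matching over the UAV image block, and the base plane from the ground control points.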
Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-03-19
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task for improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model that considers the wavelength of the light sources. In addition, the proposed transmission map provides a theoretical basis for differentiating visually important regions from others based on the turbidity and the merged classification results.
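The recovery step behind such dehazing can be sketched with the standard haze model I = J·t + A·(1 − t), where here the transmission t is allowed to differ per color channel to mimic wavelength dependence. The per-channel values below are illustrative assumptions, not the paper's estimates.

```python
def dehaze_pixel(I, A, t, t_min=0.1):
    """Recover scene radiance J from hazy intensity I using I = J*t + A*(1 - t).

    I, A: per-channel intensities in [0, 1]; t: per-channel transmission,
    allowed to differ by wavelength (R, G, B) as in a wavelength-adaptive model.
    """
    J = []
    for i, a, tc in zip(I, A, t):
        tc = max(tc, t_min)            # clamp t to avoid noise amplification
        J.append((i - a) / tc + a)
    return J

# A haze-free pixel (t = 1 in every channel) is returned unchanged
print(dehaze_pixel([0.5, 0.4, 0.3], [0.9, 0.9, 0.9], [1.0, 1.0, 1.0]))
```

Lower transmission in shorter-wavelength channels would then brighten and recolor those channels more aggressively, which is the intuition behind treating turbidity as wavelength-dependent.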
Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan
2009-01-01
Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measuring forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using low-density LiDAR, especially in high-canopy-cover forests. We used high-resolution aerial imagery together with a low-density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image was matched to the aerial image with an automated keypoint search algorithm, yielding a high registration accuracy of 0.5 pixels. A local maximum filter, watershed segmentation, and object-oriented image segmentation were used to obtain tree height and crown width. The results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using the low-density LiDAR data alone. PMID:22573971
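The local maximum filter used for treetop detection can be sketched in a few lines: a CHM cell is a candidate treetop when it exceeds all eight neighbours and a minimum height. The 3×3 window and the 2 m threshold are illustrative choices, not the paper's parameters.

```python
def treetops(chm, min_height=2.0):
    """Find local maxima in a canopy height model (3x3 neighbourhood).

    chm: 2-D list of canopy heights; returns (row, col) of cells that are
    higher than all eight neighbours and above min_height.
    """
    tops = []
    rows, cols = len(chm), len(chm[0])
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue
            neighbours = [chm[rr][cc]
                          for rr in range(max(r - 1, 0), min(r + 2, rows))
                          for cc in range(max(c - 1, 0), min(c + 2, cols))
                          if (rr, cc) != (r, c)]
            if all(h > n for n in neighbours):
                tops.append((r, c))
    return tops

chm = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 5]]
print(treetops(chm))  # [(1, 1)]
```

The detected tops would then seed the watershed segmentation that delineates individual crowns.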
USGS QA Plan: Certification of digital airborne mapping products
Christopherson, J.
2007-01-01
To facilitate acceptance of new digital technologies in aerial imaging and mapping, the US Geological Survey (USGS) and its partners have launched a Quality Assurance (QA) Plan for Digital Aerial Imagery. This should provide a foundation for the quality of digital aerial imagery and products. It introduces broader considerations regarding processes employed by aerial flyers in collecting, processing and delivering data, and provides training and information for US producers and users alike.
NASA Astrophysics Data System (ADS)
Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.
2015-04-01
Google Earth with high-resolution imagery typically takes months to process new images before online updates, a slow process that is particularly problematic for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occur across a time series, so that only regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input. Semantic segmentation, a pixel-wise classification of images using a deep neural network, is then applied to the input images. A deep neural network is used because it is not only efficient at learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. The aerial images, together with their semantic information, are stored in the 5D World Map database and serve as ground-truth images. This system visualises multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantics and colour. Next, a ground-truth image chosen from the 5D World Map database and a new aerial image with the same spatial extent but a different acquisition time are compared via a difference extraction method. The map is updated only where local changes have occurred. Hence, map updating becomes cheaper, faster, and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only changed regions.
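At its core, the difference extraction step reduces to a pixel-wise comparison of two semantic label maps; a minimal sketch (with made-up labels) follows.

```python
def change_mask(before, after):
    """Pixel-wise difference of two semantic label maps.

    Returns a binary mask (1 = label changed) so that only changed regions
    need to be re-processed when updating the map.
    """
    return [[1 if b != a else 0 for b, a in zip(brow, arow)]
            for brow, arow in zip(before, after)]

before = [["road", "bldg"], ["road", "veg"]]
after  = [["road", "bldg"], ["bldg", "veg"]]
print(change_mask(before, after))  # [[0, 0], [1, 0]]
```

Working on labels rather than raw pixels is what gives the method its partial robustness to registration and illumination differences.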
Kite Aerial Photography (KAP) as a Tool for Field Teaching
ERIC Educational Resources Information Center
Sander, Lasse
2014-01-01
Kite aerial photography (KAP) is proposed as a creative tool for geography field teaching and as a medium to approach the complexity of readily available geodata. The method can be integrated as field experiment, surveying technique or group activity. The acquired aerial images can instantaneously be integrated in geographic information systems…
1983-05-01
[Garbled fragment of a scanned technical report from the SRI International Artificial Intelligence Center (Martin A. Fischler, Program Director and Principal Investigator, May 1983); only scattered reference text is recoverable.]
High-NA metrology and sensing on Berkeley MET5
NASA Astrophysics Data System (ADS)
Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick
2017-03-01
In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact, optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative algorithm based on simulated annealing to reconstruct a wavefront based on matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.
Houska, Treva R.; Johnson, A.P.
2012-01-01
The Global Visualization Viewer (GloVis) trifold provides basic information for online access to a subset of satellite and aerial photography collections from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The GloVis (http://glovis.usgs.gov/) browser-based utility allows users to search and download National Aerial Photography Program (NAPP), National High Altitude Photography (NHAP), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Earth Observing-1 (EO-1), Global Land Survey, Moderate Resolution Imaging Spectroradiometer (MODIS), and TerraLook data. Minimum computer system requirements and customer service contact information also are included in the brochure.
Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean
2015-01-01
At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables.
Precision measurements from very-large scale aerial digital imagery.
Booth, D Terrance; Cox, Samuel E; Berryman, Robert D
2006-01-01
Resource managers need length/width measurements of a variety of items, including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches, and other features important in resource monitoring and land inspection. These measurements can now be easily and accurately obtained from very large scale aerial (VLSA) imagery with spatial resolutions as fine as 1 millimeter per pixel, using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface that accounts for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured by a high-frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss applications of these tools, including error estimates. We found that measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) made using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with errors of less than 10%. We recommend these software packages as a means of expanding the utility of aerial image data.
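The resolution bookkeeping these programs automate follows from similar triangles: the ground coverage of one pixel scales with altitude AGL divided by focal length, and an object's length is its pixel span times that coverage. The camera numbers below are illustrative assumptions, not those of the actual VLSA system.

```python
def ground_resolution(pixel_pitch_mm, focal_mm, agl_m):
    """Ground coverage of one pixel (mm/pixel) for a nadir image.

    pixel_pitch_mm: sensor pixel size; focal_mm: lens focal length;
    agl_m: altitude above ground level. All values here are illustrative.
    """
    return pixel_pitch_mm * (agl_m * 1000.0) / focal_mm

def object_length_mm(n_pixels, pixel_pitch_mm, focal_mm, agl_m):
    """Length of an object that spans n_pixels in the image."""
    return n_pixels * ground_resolution(pixel_pitch_mm, focal_mm, agl_m)

# 0.009 mm pixels, 100 mm lens, 100 m AGL -> about 9 mm per pixel
print(ground_resolution(0.009, 100.0, 100.0))
print(object_length_mm(50, 0.009, 100.0, 100.0))  # about 450 mm
```

This is why LaserLOG's precise per-frame AGL matters: a few metres of altitude error changes the mm/pixel figure, and with it every measurement.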
Video change detection for fixed wing UAVs
NASA Astrophysics Data System (ADS)
Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa
2017-10-01
In this paper we continue the work of Bartelsen et al. [1]. We present a draft process chain for image-based change detection, designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful for recognizing functional activities that are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be handled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change between "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system comprising a differential GPS and an autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data.
For the image processing and change detection, we use the approach of Müller [4]. Although it was developed for unmanned ground vehicles (UGVs), it enables near real-time video change detection for aerial videos. In conclusion, we discuss the demands on sensor systems with regard to change detection.
Control of a Quadcopter Aerial Robot Using Optic Flow Sensing
NASA Astrophysics Data System (ADS)
Hurd, Michael Brandon
This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS)-denied environments, such as indoors. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot: an optic flow algorithm provides odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing images at 250 frames/sec, and the sensor package weighs 2.5 g with a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow to control the motion of the quadcopter aerial robot. The technique presented herein can be applied to other types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
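A minimal sketch of the control idea: optic-flow odometry supplies the measured displacement, and a PID loop drives it back to the setpoint. The gains and the one-dimensional toy plant below are illustrative, not the thesis's tuned values.

```python
class PID:
    """Minimal PID controller, sketching how optic-flow odometry could feed
    a position-hold loop (gains here are illustrative, not from the thesis)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a 1-D displacement (as reported by optic-flow odometry) back to zero;
# dt matches the 250 frames/s sensor rate mentioned above.
pid = PID(kp=1.2, ki=0.1, kd=0.05)
pos, dt = 1.0, 1.0 / 250.0
for _ in range(5000):                     # simulate 20 s of flight
    pos += pid.update(0.0, pos, dt) * dt  # toy plant: velocity = command
print(abs(pos) < 0.05)  # True: displacement regulated toward zero
```

On the real vehicle the controller output would map to attitude or thrust commands rather than directly to velocity, but the feedback structure is the same.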
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts so that the printed patterns are as close to the desired shapes as possible. Model-based OPC requires a lithographic model that predicts critical dimensions after lithographic processing. The model is usually obtained via regression of parameters against experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist models) and discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty handling the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. In this way, good regression results were obtained with different sets of optical proximity effect data.
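The regression idea can be sketched with a toy genetic algorithm. For brevity it fits a two-parameter linear model rather than a lithographic model, and all GA settings (population size, operators, rates) are illustrative assumptions; the point is only that fitness-driven selection handles parameters without gradients, which is what makes mixed continuous/discrete fitting tractable.

```python
import random

def genetic_fit(data, n_pop=60, n_gen=120, seed=1):
    """Toy genetic algorithm regressing (a, b) of y = a*x + b against data.

    A stand-in for regressing lithographic model parameters; chromosomes are
    real-valued pairs evolved by tournament selection, blend crossover, and
    Gaussian mutation.
    """
    rng = random.Random(seed)

    def fitness(ch):
        a, b = ch
        return -sum((a * x + b - y) ** 2 for x, y in data)  # higher is better

    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n_pop)]
    for _ in range(n_gen):
        new = []
        for _ in range(n_pop):
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            w = rng.random()                            # blend crossover
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            if rng.random() < 0.3:                      # Gaussian mutation
                child[rng.randrange(2)] += rng.gauss(0, 0.2)
            new.append(child)
        pop = new
    return max(pop, key=fitness)

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
a, b = genetic_fit(data)
print(round(a, 1), round(b, 1))  # close to 2.0 and 1.0
```

Discrete parameters such as kernel counts would simply become integer genes with their own mutation operator.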
Distinguishing plant population and variety with UAV-derived vegetation indices
NASA Astrophysics Data System (ADS)
Oakes, Joseph; Balota, Maria
2017-05-01
Variety selection and seeding rate are two important choices that a peanut grower must make. High-yielding varieties can increase profit with no additional input costs, while seeding rate largely determines the seed cost a grower will incur. The overall purpose of this study was to examine the effect of seeding rate on different peanut varieties. With the advent of new UAV technology, we now have the possibility of using indices collected with a UAV to measure emergence, seeding rate, and growth rate, and perhaps to make yield predictions. This information could enable growers to make management decisions early in the season in response to low plant populations due to poor emergence, and could be a useful tool for estimating plant population and growth rate to help achieve desired crop stands. Red-green-blue (RGB) and near-infrared (NIR) images were collected from a UAV platform starting two weeks after planting and continuing weekly for the next six weeks. Ground NDVI was also collected each time aerial images were collected. Vegetation indices were derived from both the RGB and NIR images: greener area (GGA, the proportion of green pixels with a hue angle from 80° to 120°) and a* (the average red/green color of the image) were derived from the RGB images, while the Normalized Difference Vegetation Index (NDVI) was derived from the NIR images. Aerial indices were successful in distinguishing seeding rates and determining emergence during the first few weeks after planting, but not later in the season. At this point, however, these aerial indices are not adequate predictors of peanut yield.
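Both kinds of indices follow directly from their definitions and can be sketched per pixel; the pixel values below are made up.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def gga(hues):
    """Greener area: share of pixels whose hue lies in 80-120 degrees,
    as in the GGA definition quoted above."""
    green = sum(1 for h in hues if 80 <= h <= 120)
    return green / len(hues)

print(ndvi(0.6, 0.2))          # about 0.5: strong vegetation signal
print(gga([30, 90, 100, 200])) # 0.5: half the pixels count as "greener"
```

Averaging such per-pixel values over a plot gives the plot-level index compared against ground NDVI in the study.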
NASA Astrophysics Data System (ADS)
Nurminen, Kimmo; Karjalainen, Mika; Yu, Xiaowei; Hyyppä, Juha; Honkavaara, Eija
2013-09-01
Recent research results have shown that the performance of digital surface model extraction using novel high-quality photogrammetric images and image matching is a highly competitive alternative to laser scanning. In this article, we proceed to compare the performance of these two methods in the estimation of plot-level forest variables. Dense point clouds extracted from aerial frame images were used to estimate the plot-level forest variables needed in a forest inventory covering 89 plots. We analyzed images with 60% and 80% forward overlaps and used test plots with off-nadir angles of between 0° and 20°. When compared to reference ground measurements, the airborne laser scanning (ALS) data proved to be the most accurate: it yielded root mean square error (RMSE) values of 6.55% for mean height, 11.42% for mean diameter, and 20.72% for volume. When we applied a forward overlap of 80%, the corresponding results from aerial images were 6.77% for mean height, 12.00% for mean diameter, and 22.62% for volume. A forward overlap of 60% resulted in slightly deteriorated RMSE values of 7.55% for mean height, 12.20% for mean diameter, and 22.77% for volume. According to our results, the use of higher forward overlap produced only slightly better results in the estimation of these forest variables. Additionally, we found that the estimation accuracy was not significantly impacted by the increase in the off-nadir angle. Our results confirmed that digital aerial photographs were about as accurate as ALS in forest resources estimation as long as a terrain model was available.
Mapping of forested wetland: use of Seasat radar images to complement conventional sources ( USA).
Place, J.L.
1985-01-01
Distinguishing forested wetland from dry forest using aerial photographs is handicapped because photographs often do not reveal the presence of water below tree canopies. Radar images obtained by the Seasat satellite reveal forested wetland as highly reflective patterns on the coastal plain between Maryland and Florida. Seasat radar images may complement aerial photographs for compiling maps of wetland. A test with experienced photointerpreters revealed that interpretation accuracy was significantly higher when using Seasat radar images than when using only conventional sources.
NASA Astrophysics Data System (ADS)
Denner, Michele; Raubenheimer, Jacobus H.
2018-05-01
Historical aerial photography has become a valuable commodity in any country, as it provides a precise record of historical land management over time. In a developing country such as South Africa, which has undergone enormous political and social change in recent decades, such photography is invaluable as it provides a clear indication of past injustices and serves as an aid to addressing post-apartheid issues such as land reform and land redistribution. National mapping organisations throughout the world have vast repositories of such historical aerial photography. Effectively using these datasets in today's digital environment requires that they be georeferenced to an accuracy suitable for the intended purpose. Using image-to-image georeferencing techniques, this research sought to determine the accuracies achievable when ortho-rectifying large volumes of historical aerial imagery, against the national standard for ortho-rectification in South Africa, using two different types of scanning equipment. The research conducted four tests using aerial photography from different time epochs over a period of sixty years, in which the ortho-rectification matched each test to an already ortho-rectified mosaic of a developed area of mixed land use. The results of each test were assessed in terms of visual accuracy, spatial accuracy, and conformance to the national standard for ortho-rectification in South Africa. The results showed a decrease in the overall accuracy of the image as the epoch range between the historical image and the reference image increased. Recommendations on the applications possible given the different epoch ranges and scanning equipment used are provided.
NASA Astrophysics Data System (ADS)
Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling
2017-07-01
The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.
Object-based Image Classification of Arctic Sea Ice and Melt Ponds through Aerial Photos
NASA Astrophysics Data System (ADS)
Miao, X.; Xie, H.; Li, Z.; Lei, R.
2013-12-01
The last six years have marked the lowest Arctic summer sea ice extents in the modern era, with a new record summer minimum (3.4 million km2) set on 13 September 2012. It has been predicted that the Arctic could be free of summer ice within the next 25-30 years. The loss of Arctic summer ice could have serious consequences, such as higher water temperatures due to the positive albedo feedback, more powerful and frequent storms, rising sea levels, diminished habitats for polar animals, and more pollution due to fossil fuel exploitation and/or increased traffic through the Northwest/Northeast Passage. In these processes, melt ponds play an important role in Earth's radiation balance, since they strongly absorb solar radiation rather than reflecting it as snow and ice do. Therefore, it is necessary to develop the ability to predict sea ice/melt pond extents and their space-time evolution, which is pivotal for preparing for future environmental variation and uncertainty as well as political, economic, and military needs. Considerable effort has been put into Arctic sea ice modeling to simulate sea ice processes. However, these sea ice models were initiated and developed based on limited field surveys and aircraft or satellite image data. It is therefore necessary to collect high-resolution aerial photos of sea ice in a systematic way to tune, validate, and improve the models. Many sea ice aerial photos are currently available, such as those from the Chinese Arctic expeditions (CHINARE 2008, 2010, 2012), SHEBA 1998, and HOTRAX 2005. However, manually delineating sea ice and melt ponds in these images is time-consuming and labor-intensive. In this study, we use an object-based remote sensing classification scheme to extract sea ice and melt ponds efficiently from 1,727 aerial photos taken during CHINARE 2010. The algorithm includes three major steps, as follows.
(1) Image segmentation groups neighboring pixels into objects according to the similarity of their spectral and texture information; (2) a random forest ensemble classifier distinguishes the following objects: water, submerged ice, shadow, and ice/snow; and (3) polygon neighbor analysis further separates melt ponds from submerged ice according to spatial neighboring relationships. Our results illustrate the spatial distribution and morphological characteristics of melt ponds at different latitudes of the Arctic Pacific sector. This method can be applied to the massive sets of photos and images taken in past and future years to derive detailed sea ice and melt pond distributions and their changes through the years.
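Steps (2) and (3) can be sketched with a toy per-segment classifier followed by a neighbour-analysis pass; the features, thresholds, and label names are illustrative stand-ins for the random forest and polygon analysis actually used.

```python
def classify_segment(mean_intensity, blueness):
    """Toy stand-in for the ensemble classifier: label one image segment
    from two illustrative features (thresholds are made up)."""
    if mean_intensity < 0.2:
        return "water"
    if blueness > 0.5:
        return "submerged_ice"
    return "ice_snow"

def refine_ponds(labels, neighbours):
    """Neighbour analysis: a submerged-ice polygon surrounded only by
    ice/snow is relabelled as a melt pond."""
    out = dict(labels)
    for seg, lab in labels.items():
        if (lab == "submerged_ice" and neighbours.get(seg)
                and all(labels[n] == "ice_snow" for n in neighbours[seg])):
            out[seg] = "melt_pond"
    return out

labels = {1: classify_segment(0.1, 0.1),   # water
          2: classify_segment(0.7, 0.8),   # submerged ice
          3: classify_segment(0.9, 0.2)}   # ice/snow
print(refine_ponds(labels, {2: [3]}))  # segment 2 becomes a melt pond
```

The spatial pass is what separates a pond sitting on an ice floe from genuinely submerged ice at a floe edge, which spectral features alone cannot do.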
DUV or EUV: that is the question
NASA Astrophysics Data System (ADS)
Williamson, David M.
2000-11-01
Lord Rayleigh's well-known equations for resolution and depth of focus indicate that resolution is better improved by reducing the wavelength of light rather than by increasing the numerical aperture (NA) of the projection optics, particularly when NA is approaching its physical limit of 1.0 in air (or vacuum). Vector aerial image simulations of diffraction-limited Deep Ultraviolet (DUV) and Extreme Ultraviolet (EUV) lithographic systems verify this simple view, even though Rayleigh's constants in microlithography are not constant, because of a variety of image enhancement techniques that attempt to compensate for the shortcomings of the aerial image when it is pushed to the limit. The aerial image is not the whole story, however. The competition between DUV and EUV systems will be decided more by economic and technological factors such as risk, time and cost of development, and cost of ownership. These in turn depend on the cost, availability, and quality of light sources, refracting materials, photoresists, and reticles.
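For reference, the Rayleigh criteria invoked above are commonly written as follows, with process-dependent constants k1 and k2:

```latex
R = k_1 \,\frac{\lambda}{\mathrm{NA}}, \qquad
\mathrm{DOF} = k_2 \,\frac{\lambda}{\mathrm{NA}^2}
```

Halving the wavelength halves R while also only halving the depth of focus; doubling NA instead halves R but cuts the depth of focus by a factor of four, which is why wavelength reduction is the gentler route once NA approaches its physical limit.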
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up large-scene reconstruction from images acquired by unmanned aerial vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Since topographic relief can usually be ignored in comparison with the UAV's flight altitude, we assume the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable, and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
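The overlap criterion can be sketched under the flat-scene assumption: each view's ground footprint follows from a similarity (weak perspective) transform of the image frame, and a pair is kept for matching only when the footprints overlap enough. The geometry and the bounding-box approximation below are illustrative, not the paper's exact formulation.

```python
import math

def bbox(pts):
    """Axis-aligned bounding box of a point list: (min corner, max corner)."""
    xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))

def footprint(cx, cy, yaw, scale, w, h):
    """Ground footprint corners of an image under a weak perspective model:
    uniform scale, in-plane rotation, and translation (flat-scene sketch)."""
    pts = []
    for dx, dy in [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]:
        x = cx + scale * (dx * math.cos(yaw) - dy * math.sin(yaw))
        y = cy + scale * (dx * math.sin(yaw) + dy * math.cos(yaw))
        pts.append((x, y))
    return pts

def overlap_ratio(f1, f2):
    """Bounding-box overlap of two footprints, relative to footprint 1;
    pairs above a threshold would be passed to feature matching."""
    (ax1, ay1), (ax2, ay2) = bbox(f1)
    (bx1, by1), (bx2, by2) = bbox(f2)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    a1 = (ax2 - ax1) * (ay2 - ay1)
    return iw * ih / a1 if a1 else 0.0

f1 = footprint(0.0, 0.0, 0.0, 1.0, 100, 100)
f2 = footprint(50.0, 0.0, 0.0, 1.0, 100, 100)
print(overlap_ratio(f1, f2))  # 0.5: half of image 1 overlaps image 2
```

Pruning pairs this way turns the quadratic all-pairs matching stage into a near-linear one for long strips, which is where the speed-up comes from.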
NASA Astrophysics Data System (ADS)
Kim, H.; Lee, J.; Choi, K.; Lee, I.
2012-07-01
Rapid responses for emergency situations such as natural disasters or accidents often require geo-spatial information describing the on-going status of the affected area. Such geo-spatial information can be promptly acquired by a manned or unmanned aerial vehicle based multi-sensor system that can monitor the emergent situations in near real-time from the air using several kinds of sensors. Thus, we are in progress of developing such a real-time aerial monitoring system (RAMS) consisting of both aerial and ground segments. The aerial segment acquires the sensory data about the target areas by a low-altitude helicopter system equipped with sensors such as a digital camera and a GPS/IMU system and transmits them to the ground segment through a RF link in real-time. The ground segment, which is a deployable ground station installed on a truck, receives the sensory data and rapidly processes them to generate ortho-images, DEMs, etc. In order to generate geo-spatial information, in this system, exterior orientation parameters (EOP) of the acquired images are obtained through direct geo-referencing because it is difficult to acquire coordinates of ground points in disaster area. The main process, since the data acquisition stage until the measurement of EOP, is discussed as follows. First, at the time of data acquisition, image acquisition time synchronized by GPS time is recorded as part of image file name. Second, the acquired data are then transmitted to the ground segment in real-time. Third, by processing software for ground segment, positions/attitudes of acquired images are calculated through a linear interpolation using the GPS time of the received position/attitude data and images. Finally, the EOPs of images are obtained from position/attitude data by deriving the relationships between a camera coordinate system and a GPS/IMU coordinate system. In this study, we evaluated the accuracy of the EOP decided by direct geo-referencing in our system. 
To do this, we used the precisely calculated EOP from a digital photogrammetry workstation (DPW) as reference data. The evaluation results indicate that the accuracy of the EOP acquired by our system is reasonable in comparison with the performance of the GPS/IMU system. Our system can also acquire precise multi-sensory data to generate geo-spatial information in emergency situations. In the near future, we plan to complete the development of the rapid generation system of the ground segment. Our system is expected to acquire ortho-images and DEMs of the damaged area in near real-time. Its performance, along with the accuracy of the generated geo-spatial information, will be evaluated and reported in future work.
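The linear interpolation step described above can be sketched as follows; the timestamps and trajectory values are illustrative stand-ins, not data from the RAMS system, and attitude angles are assumed to be unwrapped (no jumps across ±180°).

```python
import numpy as np

def interpolate_eop(img_times, gps_times, positions, attitudes):
    """Linearly interpolate platform position/attitude at image exposure times.

    gps_times : (N,) GPS epochs of the navigation solution
    positions : (N, 3) X, Y, Z of the GPS/IMU antenna at those epochs
    attitudes : (N, 3) roll, pitch, yaw in degrees (assumed unwrapped)
    """
    pos = np.column_stack([np.interp(img_times, gps_times, positions[:, k])
                           for k in range(3)])
    att = np.column_stack([np.interp(img_times, gps_times, attitudes[:, k])
                           for k in range(3)])
    return pos, att

# Illustrative trajectory sampled at 1 Hz; the image is exposed halfway
# between two navigation epochs, so its position/attitude is the midpoint.
gps_t = np.array([0.0, 1.0, 2.0])
xyz = np.array([[0.0, 0.0, 500.0], [10.0, 0.0, 500.0], [20.0, 0.0, 500.0]])
rpy = np.array([[0.0, 0.0, 90.0], [2.0, 0.0, 90.0], [4.0, 0.0, 90.0]])
p, a = interpolate_eop(np.array([0.5]), gps_t, xyz, rpy)
```

The full EOP would then follow by applying the camera-to-GPS/IMU lever arm and boresight rotation, as the abstract describes.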
Photogrammetric mapping using unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Graça, N.; Mitishita, E.; Gonçalves, J.
2014-11-01
Nowadays, Unmanned Aerial Vehicle (UAV) technology has attracted attention for aerial photogrammetric mapping. The low cost and the ability to fly automatically along commanded waypoints can be considered the main advantages of this technology in photogrammetric applications. Using GNSS/INS technologies, the images are taken at the planned positions of the exposure stations, and the exterior orientation parameters (position Xo, Yo, Zo and attitude ω, φ, κ) of the images can be directly determined. However, common off-the-shelf UAVs do not replace the traditional aircraft platform. Overall, the main shortcomings are related to: difficulties in obtaining authorization to perform flights in urban and rural areas, platform stability, flight safety, stability of the image block configuration, the high number of images, and inaccuracies in the direct determination of the exterior orientation parameters of the images. This paper presents the results obtained from a photogrammetric mapping project using aerial images from the SIMEPAR UAV system. The PIPER J3 UAV Hydro aircraft was used, equipped with a MicroPilot MP2128g autopilot; the system is fully integrated with 3-axis gyros/accelerometers, GPS, a pressure altimeter, and pressure airspeed sensors. A Sony Cyber-shot DSC-W300 was calibrated and used to acquire the image block. The flight height was close to 400 m, resulting in a GSD near 0.10 m. The state of the art of the technology used, the methodologies, and the results obtained are shown and discussed. Finally, the advantages and shortcomings found in the study and the main conclusions are presented.
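The reported GSD follows from simple projective geometry; a minimal check, where the pixel pitch and focal length are assumed illustrative values chosen to be consistent with the reported 400 m height and 0.10 m GSD (they are not manufacturer-verified figures for the DSC-W300):

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, flight_height_m):
    """GSD = pixel size projected onto the ground through the lens:
    GSD = p * H / f  (similar triangles of the pinhole model)."""
    return pixel_pitch_m * flight_height_m / focal_length_m

# Assumed camera constants (illustrative): ~1.9 um pixel pitch and
# ~7.6 mm focal length, flown at the 400 m height stated in the abstract.
gsd = ground_sample_distance(1.9e-6, 7.6e-3, 400.0)
```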
Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.
Kedzierski, Michal; Delis, Paulina
2016-06-23
The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery in which the external orientation of the first image was typical for aerial photogrammetry, whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations and assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The experiments conducted have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
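The 90° tilt assumption simplifies the rotation matrix inside the collinearity equations, because the sine and cosine terms collapse to 0 and 1. A minimal numeric illustration (using a rotation about the X axis for ω = 90°; the choice of axis is illustrative):

```python
import numpy as np

def rot_x(omega_rad):
    """Rotation matrix about the X axis, as used in the collinearity model."""
    c, s = np.cos(omega_rad), np.sin(omega_rad)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

# For omega = 90 deg, cos -> 0 and sin -> 1, so the matrix reduces to a
# simple axis permutation: the nadir-pointing optical axis (0, 0, -1)
# becomes the horizontal direction (0, 1, 0), i.e. the aerial camera
# geometry turns into a terrestrial (horizontal-looking) one.
R = rot_x(np.pi / 2.0)
axis = R @ np.array([0.0, 0.0, -1.0])
```

Substituting this permutation matrix into the collinearity equations removes most of the trigonometric terms, which is what shortens the processing time.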
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision entails considerable effort unless extensive camera stabilization is used. Stabilization, however, also implies high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach builds on previous work on determining the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then reliably be determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, measurements from a high-end navigation system and ground control points are used.
Crack identification for rigid pavements using unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bahaddin Ersoz, Ahmet; Pekcan, Onur; Teke, Turker
2017-09-01
Pavement condition assessment is an essential part of modern pavement management systems, as rehabilitation strategies are planned based upon its outcomes. For proper evaluation of existing pavements, they must be continuously and effectively monitored using practical means. Conventionally, truck-based pavement monitoring systems have been in use for assessing the remaining life of in-service pavements. Although such systems produce accurate results, they can be expensive and data processing can be time-consuming, which makes them infeasible given the demand for quick pavement evaluation. To overcome these problems, Unmanned Aerial Vehicles (UAVs) can be used as an alternative, as they are relatively cheaper and easier to use. In this study, we propose a UAV-based pavement crack identification system for monitoring the existing conditions of rigid pavements. The system consists of recently introduced image processing algorithms used together with conventional machine learning techniques, both of which are used to detect cracks on rigid pavement surfaces and classify them. Through image processing, the distinct features of labelled crack bodies are first obtained from the UAV-based images and then used to train a Support Vector Machine (SVM) model. The performance of the developed SVM model was assessed in a field study performed along a rigid pavement exposed to low traffic and serious temperature changes. Available cracks were classified using the UAV-based system, and the obtained results indicate that it provides a good alternative for pavement monitoring applications.
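The crack/no-crack classification step can be illustrated with a minimal linear SVM trained by hinge-loss subgradient descent (a Pegasos-style solver); the two-dimensional "features" below are invented, mean-centered stand-ins for the image descriptors the study extracts, not its actual data:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style subgradient descent on the regularized hinge loss.
    X : (n, d) feature rows; y : (n,) labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in range(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1.0:           # margin violated: push
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                                  # margin ok: only shrink
                w = (1.0 - eta * lam) * w
    return w

# Invented, mean-centered toy features (e.g. edge-density and darkness
# deviations): crack patches score high, intact patches score low.
X = np.array([[0.4, 0.3], [0.3, 0.4], [0.35, 0.2],        # crack patches
              [-0.4, -0.3], [-0.3, -0.4], [-0.35, -0.2]])  # intact patches
y = np.array([1, 1, 1, -1, -1, -1])
w = train_linear_svm(X, y)
pred = np.sign(X @ w)
```

A production system would of course use a mature solver (e.g. a library SVM with a kernel), but the update rule above is the core of the linear case.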
NASA Astrophysics Data System (ADS)
Chesley, J. T.; Leier, A. L.; White, S.; Torres, R.
2017-06-01
Recently developed data collection techniques allow for improved characterization of sedimentary outcrops. Here, we outline a workflow that utilizes unmanned aerial vehicles (UAV) and structure-from-motion (SfM) photogrammetry to produce sub-meter-scale outcrop reconstructions in 3-D. SfM photogrammetry uses multiple overlapping images and an image-based terrain extraction algorithm to reconstruct the location of individual points from the photographs in 3-D space. The results of this technique can be used to construct point clouds, orthomosaics, and digital surface models that can be imported into GIS and related software for further study. The accuracy of the reconstructed outcrops, with respect to an absolute framework, is improved with geotagged images or independently gathered ground control points, and the internal accuracy of 3-D reconstructions is sufficient for sub-meter scale measurements. We demonstrate this approach with a case study from central Utah, USA, where UAV-SfM data can help delineate complex features within Jurassic fluvial sandstones.
Beck, Marcus W.; Vondracek, Bruce C.; Hatch, Lorin K.; Vinje, Jason
2013-01-01
Lake resources can be negatively affected by environmental stressors originating from multiple sources and different spatial scales. Shoreline development, in particular, can negatively affect lake resources through decline in habitat quality, physical disturbance, and impacts on fisheries. The development of remote sensing techniques that efficiently characterize shoreline development in a regional context could greatly improve management approaches for protecting and restoring lake resources. The goal of this study was to develop an approach using high-resolution aerial photographs to quantify and assess docks as indicators of shoreline development. First, we describe a dock analysis workflow that can be used to quantify the spatial extent of docks using aerial images. Our approach incorporates pixel-based classifiers with object-based techniques to effectively analyze high-resolution digital imagery. Second, we apply the analysis workflow to quantify docks for 4261 lakes managed by the Minnesota Department of Natural Resources. Overall accuracy of the analysis results was 98.4% (87.7% based on ) after manual post-processing. The analysis workflow was also 74% more efficient than the time required for manual digitization of docks. These analyses have immediate relevance for resource planning in Minnesota, whereas the dock analysis workflow could be used to quantify shoreline development in other regions with comparable imagery. These data can also be used to better understand the effects of shoreline development on aquatic resources and to evaluate the effects of shoreline development relative to other stressors.
a Metadata Based Approach for Analyzing Uav Datasets for Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Dhanda, A.; Remondino, F.; Santana Quintero, M.
2018-05-01
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images in order to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Using user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
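The overlap-based filtering idea can be sketched as follows; the footprint length, image spacing, and target overlap are illustrative numbers (the actual algorithm derives spacing and footprint from each image's metadata):

```python
def filter_for_overlap(num_images, spacing_m, footprint_m, target_overlap):
    """Keep every k-th image so that consecutive kept images still meet
    the target forward (end) overlap: overlap = 1 - baseline / footprint."""
    max_baseline = footprint_m * (1.0 - target_overlap)
    step = max(1, int(max_baseline // spacing_m))   # largest safe thinning
    kept = list(range(0, num_images, step))
    achieved = 1.0 - (step * spacing_m) / footprint_m
    return kept, achieved

# Illustrative grid-flight strip: 41 images 10 m apart, 100 m ground
# footprint, and a 60 % forward-overlap requirement for processing.
kept, achieved = filter_for_overlap(41, 10.0, 100.0, 0.60)
```

Thinning from 41 to 11 images while preserving 60 % overlap is the kind of block-size reduction the abstract describes.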
a New Paradigm for Matching UAV- and Aerial Images
NASA Astrophysics Data System (ADS)
Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.
2016-06-01
This paper investigates the performance of SIFT-based image matching under large differences in image scaling and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D-reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, perform poorly or even fail in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations, and the rejection of many correct matches when the ratio test is applied afterwards. Therefore, a new feature matching method is presented that overcomes these problems and delivers thousands of matches, by means of a novel feature point detection strategy, a one-to-many matching scheme, and the substitution of the ratio test by geometric constraints that yield geometrically correct matches at repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels can be achieved.
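The ratio test that the paper replaces can be sketched in a few lines; the toy descriptors below are invented to show exactly the failure mode the abstract names: a feature in a repetitive region has two nearly identical candidates and is discarded even though one of them is correct.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only if the nearest neighbour is clearly closer
    than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Invented 3-D "descriptors": feature 0 has one clear counterpart,
# feature 1 sees two near-duplicate candidates (repetitive texture)
# and is therefore rejected by the ratio test.
desc_a = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
desc_b = np.array([[0.9, 0.1, 0.0],    # clear match for feature 0
                   [0.0, 0.95, 0.0],   # candidate 1 for feature 1
                   [0.0, 0.94, 0.0],   # near-duplicate candidate
                   [5.0, 5.0, 5.0]])   # distant outlier
matches = ratio_test_matches(desc_a, desc_b)
```

The proposed one-to-many scheme instead keeps both candidates for feature 1 and lets geometric constraints decide, which is why it recovers matches at repetitive regions.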
Remote sensing for developing world agriculture: opportunities and areas for technical development
NASA Astrophysics Data System (ADS)
Jeunnette, Mark N.; Hart, Douglas P.
2016-10-01
A parameterized numerical model is constructed to compare platform options for collecting aerial imagery to support agricultural electronic information services in developing countries such as India. A sensitivity analysis shows that when unmanned aerial vehicles (UAVs) are limited in flight altitude by regulations, the velocity and altitude available to manned aircraft lead to a lower cost of operation at altitudes greater than 2,000 ft above ground level (AGL). If, however, the UAVs are allowed to fly higher, they become cost-competitive once again at approximately 1,000 ft AGL or higher. Examination of the model's assumptions highlights two areas for additional technology development: baseline-dependent, feature-based image registration to enable wider area coverage, and reflectance reconstruction for ratio-based agricultural indices.
NASA Astrophysics Data System (ADS)
Leydsman-McGinty, E. I.; Ramsey, R. D.; McGinty, C.
2013-12-01
The Remote Sensing/GIS Laboratory at Utah State University, in cooperation with the United States Environmental Protection Agency, is quantifying impervious surfaces for three watershed sub-basins in Utah. The primary objective of developing watershed-scale quantifications of impervious surfaces is to provide an indicator of potential impacts to wetlands that occur within the Wasatch Front and along the Great Salt Lake. A geospatial layer of impervious surfaces can assist state agencies involved with Utah's Wetlands Program Plan (WPP) in understanding the impacts of impervious surfaces on wetlands, as well as support them in carrying out goals and actions identified in the WPP. The three watershed sub-basins, Lower Bear-Malad, Lower Weber, and Jordan, span the highly urbanized Wasatch Front and are consistent with focal areas in need of wetland monitoring and assessment as identified in Utah's WPP. Geospatial layers of impervious surface currently exist in the form of national and regional land cover datasets; however, these datasets are too coarse to be utilized in fine-scale analyses. In addition, the pixel-based image processing techniques used to develop these coarse datasets have proven insufficient in smaller scale or detailed studies, particularly when applied to high-resolution satellite imagery or aerial photography. Therefore, object-based image analysis techniques are being implemented to develop the geospatial layer of impervious surfaces. Object-based image analysis techniques employ a combination of both geospatial and image processing methods to extract meaningful information from high-resolution imagery. Spectral, spatial, textural, and contextual information is used to group pixels into image objects and then subsequently used to develop rule sets for image classification. 
eCognition, an object-based image analysis software program, is being utilized in conjunction with one-meter resolution National Agriculture Imagery Program (NAIP) aerial photography from 2011.
Profiles of gamma-ray and magnetic data from aerial surveys over the conterminous United States
Duval, Joseph S.; Riggle, Frederic E.
1999-01-01
This publication contains images for the conterminous U.S. generated from geophysical data, software for displaying and analyzing the images, and software for displaying and examining the profile data from the aerial surveys flown as part of the National Uranium Resource Evaluation (NURE) Program of the U.S. Department of Energy. The images included are of gamma-ray data (uranium, thorium, and potassium channels), Bouguer gravity data, isostatic residual gravity data, aeromagnetic anomalies, topography, and topography with bathymetry.
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The L0-norm sparse priors of the gradient and the dark channel are then used to estimate the APSF blur kernel, and the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indexes.
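The final Wiener-filtering step can be sketched in the frequency domain; the blur kernel here is a small invented PSF, not the multiple-scattering APSF the paper estimates, and it is chosen so that its transfer function has no zeros and the inversion stays well conditioned:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-6):
    """Frequency-domain Wiener filter: F = conj(H) * G / (|H|^2 + k),
    where k stands in for the noise-to-signal power ratio."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Invented mild PSF: a center-weighted cross on a 32x32 grid whose DFT
# stays bounded away from zero (minimum 0.4), so recovery is near-exact.
n = 32
psf = np.zeros((n, n))
psf[0, 0] = 0.7
for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    psf[dy % n, dx % n] = 0.075

rng = np.random.default_rng(0)
img = rng.random((n, n))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

In the paper's pipeline the kernel H comes from the estimated APSF rather than being known in advance; that estimation is the "blind" part of the method.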
Helicopter-based Photography for use in SfM over the West Greenland Ablation Zone
NASA Astrophysics Data System (ADS)
Mote, T. L.; Tedesco, M.; Astuti, I.; Cotten, D.; Jordan, T.; Rennermalm, A. K.
2015-12-01
Results of low-elevation, high-resolution aerial photography from a helicopter are reported for a supraglacial watershed in West Greenland. Data were collected at the end of July 2015 over a supraglacial watershed terminating in the Kangerlussuaq region of Greenland and following the Utrecht University K-Transect of meteorological stations. The aerial photography reported here was a set of complementary observations used to support hyperspectral measurements of albedo, discussed in the Greenland Ice Sheet hydrology session of this AGU Fall Meeting. A compact digital camera was installed inside a pod mounted on the side of the helicopter, together with gyroscopes and accelerometers that were used to estimate the relative orientation. Continuous video was collected on the 19 and 21 July flights, and frames extracted from the videos were used to create a series of aerial photos. Individual geo-located aerial photos were also taken on a 24 July flight. We demonstrate that by maintaining a constant flight elevation and a near-constant ground speed, a helicopter with a mounted camera can capture the 3-D structure of the ablation zone of the ice sheet at an unprecedented spatial resolution on the order of 5-10 cm. By setting the intervalometer on the camera to 2 seconds, the images obtained provide sufficient overlap (>60%) for digital image alignment, even at a flight elevation of ~170 m. As a result, very accurate point matching between photographs can be achieved and an extremely dense RGB-encoded point cloud can be extracted. Overlapping images provide a series of stereopairs that can be used to create point cloud data consisting of three position and three color variables: X, Y, Z, R, G, and B. This point cloud is then used to create orthophotos or large-scale digital elevation models, thus accurately displaying ice structure.
The geo-referenced images provide a ground spatial resolution of approximately 6 cm, permitting analysis of detailed features, such as cryoconite holes, evolving small order streams, and cracks from hydrofracturing.
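The >60% overlap claim can be checked with simple footprint geometry; only the ~170 m elevation and the 2 s intervalometer setting come from the text, while the along-track field of view and ground speed below are assumed, illustrative values:

```python
import math

def forward_overlap(height_m, fov_deg, speed_ms, interval_s):
    """Overlap between consecutive frames from a constant-speed platform.
    footprint = 2 * H * tan(FOV / 2); overlap = 1 - baseline / footprint."""
    footprint = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    baseline = speed_ms * interval_s
    return 1.0 - baseline / footprint

# 170 m flight elevation and a 2 s interval (from the text); the 50 deg
# along-track FOV and 30 m/s ground speed are assumptions for illustration.
overlap = forward_overlap(170.0, 50.0, 30.0, 2.0)
```

Under these assumptions the overlap lands just above the 60% threshold that structure-from-motion alignment typically needs, consistent with the abstract.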
Coma measurement by transmission image sensor with a PSM
NASA Astrophysics Data System (ADS)
Wang, Fan; Wang, Xiangzhao; Ma, Mingying; Zhang, Dongqing; Shi, Weijie; Hu, Jianming
2005-01-01
As feature sizes decrease, especially with the use of resolution enhancement techniques such as off-axis illumination and phase-shifting masks, fast and accurate in-situ measurement of coma has become very important for improving the performance of modern lithographic tools. Coma can be measured with the transmission image sensor, an aerial image measurement device, by measuring the positions of the aerial image at multiple illumination settings. In the present paper, we improve the measurement accuracy of this technique with an alternating phase-shifting mask. Using scalar diffraction theory, we analyze the effect of coma on the aerial image. To analyze the effect of the alternating phase-shifting mask, we compare the pupil filling of the mark used in the original technique with that of the phase-shifted mark used in the new technique. We calculate the coma-induced image displacements of the marks at multiple partial coherence and NA settings using the PROLITH simulation program. The simulation results show that the accuracy of coma measurement can be increased by approximately 20 percent using the alternating phase-shifting mask.
Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology
NASA Astrophysics Data System (ADS)
Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.
2014-02-01
Monitoring the changing Yellow River icicle hazard requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive-care area in southern Baotou, in the Inner Mongolia autonomous region. Monitoring ran from 20 February to 30 March 2013. Using the proposed video data processing method, the automatic extraction of 1,832 video key frames covering an area of 7.8 km2 took 34.786 seconds. Stitching and correction took 122.34 seconds, and the accuracy was better than 0.5 m. Through comparison of the precisely processed sequences of stitched video images, the method determines changes in the Yellow River ice and accurately locates the ice bar, improving on the traditional visual method by more than 100 times. The results provide accurate decision-support information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break is repeatedly monitored, and the ice break is measured to five-meter accuracy through precise monitoring and evaluation analysis.
Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification
NASA Astrophysics Data System (ADS)
Li, R.; Zhang, T.; Geng, R.; Wang, L.
2018-04-01
In order to classify high-spatial-resolution images more accurately, a hierarchical rule-based object-based classification framework was developed in this research, based on a high-resolution image with airborne Light Detection and Ranging (LiDAR) data. The eCognition software was employed for the whole process. In detail, firstly, the FBSP (Fuzzy-Based Segmentation Parameter) optimizer is used to obtain the optimal scale parameters for different land cover types. Then, using the segmented regions as basic units, the classification rules for the various land cover types are established according to the spectral, morphological, and texture features extracted from the optical images, and the height feature from LiDAR, respectively. Thirdly, the object classification results are evaluated using the confusion matrix, overall accuracy, and Kappa coefficient. The results show that the combination of an aerial image and airborne LiDAR data yields higher accuracy.
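The accuracy-assessment step (confusion matrix, overall accuracy, Cohen's kappa) can be sketched as follows; the two-class matrix is an invented example, not the study's results:

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                    # observed agreement
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
    kappa = (po - pe) / (1.0 - pe)                  # chance-corrected
    return po, kappa

# Invented two-class example (e.g. building vs. non-building objects):
# 50 + 35 of 100 objects classified correctly.
oa, kappa = accuracy_metrics([[50, 5], [10, 35]])
```

Kappa discounts the agreement expected by chance (pe), which is why it is usually reported alongside overall accuracy in object-based classification studies.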
NASA Technical Reports Server (NTRS)
1998-01-01
Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.
Creating Digital Environments for Multi-Agent Simulation
2003-12-01
A tile is a spatial partition of a coverage that shares the same set of feature classes. Orthophoto datasets can be made from rectified grayscale aerial images; these datasets can support various weapon systems and command-and-control applications. The data follow the Raster Product Format (RPF) Standard and consist of unclassified, seamless orthophotos made from rectified grayscale aerial images.
Mobile Aerial Tracking and Imaging System (MATRIS) for Aeronautical Research
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Blanchard, R. C.; Miller, G. M.
2004-01-01
A mobile, rapidly deployable ground-based system to track and image targets of aeronautical interest has been developed. Targets include reentering reusable launch vehicles (RLVs) as well as atmospheric and transatmospheric vehicles. The optics were designed to image targets in the visible and infrared wavelengths. To minimize acquisition cost and development time, the system uses commercially available hardware and software where possible. The conception and initial funding of this system originated with a study of ground-based imaging of global aerothermal characteristics of RLV configurations. During that study NASA teamed with the Missile Defense Agency/Innovative Science and Technology Experimentation Facility (MDA/ISTEF) to test techniques and analysis on two Space Shuttle flights.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus, in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and from the UltraCam Osprey, our oblique camera system, which offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of the workflow as well. The first part is the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
Investigations on the Bundle Adjustment Results from Sfm-Based Software for Mapping Purposes
NASA Astrophysics Data System (ADS)
Lumban-Gaol, Y. A.; Murtiyoso, A.; Nugroho, B. H.
2018-05-01
Since its inception, aerial photography has been used for topographic mapping. Large-scale aerial photography contributed to the creation of many of the topographic maps around the world. In Indonesia, a 2013 government directive on spatial management re-stressed the need for topographic maps, with aerial photogrammetry providing the main method of acquisition. However, the large-scale need to generate such maps is often constrained by budgets. Today, SfM (Structure-from-Motion) offers quicker and less expensive solutions to this problem. Considering the precision required for topographic missions, however, these solutions need to be assessed to see whether they provide a sufficient level of accuracy. In this paper, the popular SfM-based software Agisoft PhotoScan is used to perform bundle adjustment on a set of large-scale aerial images. The aim of the paper is to compare its bundle adjustment results with those generated by more classical photogrammetric software, namely Trimble Inpho and ERDAS IMAGINE. Furthermore, in order to provide more bundle adjustment statistics for comparison, the Damped Bundle Adjustment Toolbox (DBAT) was also used to reprocess the PhotoScan project. Results show that the PhotoScan results are less stable than those generated by the two photogrammetric software programmes. This translates to lower accuracy, which may affect the final photogrammetric product.
NASA Astrophysics Data System (ADS)
Chrétien, L.-P.; Théau, J.; Ménard, P.
2015-08-01
Wildlife aerial surveys require time and significant resources. Multispecies detection could reduce costs to a single census for species that coexist spatially. Traditional methods are demanding for observers in terms of concentration and are not adapted to multispecies censuses. The processing of multispectral aerial imagery acquired from an unmanned aerial vehicle (UAV) represents a potential solution for multispecies detection. The method used in this study is based on a multicriteria object-based image analysis applied to visible and thermal infrared imagery acquired from a UAV. This project aimed to detect American bison, fallow deer, gray wolves, and elk located in separate enclosures with a known number of individuals. Results showed that all bison and elk were detected without errors, while for deer and wolves, 0-2 individuals per flight line were mistaken for ground elements or went undetected. The approach also detected the four targeted species simultaneously and separately, even in the presence of other, untargeted species. These results confirm the potential of multispectral imagery acquired from a UAV for wildlife censuses. Its operational application remains limited to small areas, owing to current regulations and available technology. Standardization of the workflow will help reduce the time and expertise required for such technology.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed-cloud visibility conditions. These images were enhanced using the Visual Servo (VS) process, which makes use of the Multiscale Retinex. The images were then quantified with the visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images and the degree of visibility improvement achieved by the enhancement process. The large aggregate of data exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall, the results support the idea that in most cases not involving extreme reductions in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser but still substantial gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
Stanford automatic photogrammetry research
NASA Technical Reports Server (NTRS)
Quam, L. H.; Hannah, M. J.
1974-01-01
A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.
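The correlation-matching idea underlying this study can be sketched with normalized cross-correlation along one epipolar row: a small patch from one image is slid across the other, and the shift with the highest correlation gives the disparity. A simplified, illustrative example (1-D patches; the function names are hypothetical, not from the Stanford system):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (lists of floats)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_disparity(left_row, right_row, x, w, max_d):
    """Slide a window from the left image along the right epipolar row
    and return the disparity with the highest correlation score."""
    patch = left_row[x:x + w]
    scores = [(ncc(patch, right_row[x - d:x - d + w]), d)
              for d in range(0, max_d + 1) if x - d >= 0]
    return max(scores)[1]
```

Repeating this over a grid of patches yields the disparity field from which a 3-dimensional surface model is triangulated.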
Evaluating Great Lakes bald eagle nesting habitat with Bayesian inference
Teryl G. Grubb; William W. Bowerman; Allen J. Bath; John P. Giesy; D. V. Chip Weseloh
2003-01-01
Bayesian inference facilitated structured interpretation of a nonreplicated, experience-based survey of potential nesting habitat for bald eagles (Haliaeetus leucocephalus) along the five Great Lakes shorelines. We developed a pattern recognition (PATREC) model of our aerial search image with six habitat attributes: (a) tree cover, (b) proximity and...
Ayhan, E; Erden, O; Gormus, E T
2008-12-01
Nowadays, cities are developing and changing rapidly due to increases in population and immigration. Rapid change makes it necessary to control cities through planning. Satellite images and aerial photographs enable us to track urban development and provide current data about urban areas. With the help of these images, the dynamic structure of cities can be interrogated. This study is composed of three steps. In the first step, orthophoto images were generated in order to track urban development by using aerial photographs and satellite images. In this step, the panchromatic (PAN), multispectral (MS), and pan-sharpened images of the IKONOS satellite were used as input satellite data, and the accuracy of the orthophoto images was investigated in detail in terms of the digital elevation model (DEM), control points, input images, and their properties. In the second step, a 3D city model with a database was generated with the help of the orthophoto images and vector layouts. In the last step, up-to-date urban information was obtained from the 3D city model. This study shows that it is possible to detect unlicensed buildings and areas that are going to be nationalized, and that it is easy to document existing alterations in cities with the help of current development plans and orthophoto images. Since access to updated data is essential for controlling development and monitoring temporal alterations in urban areas, this study demonstrates that orthophoto images generated from aerial photos and satellite images are reliable sources of topographical information for change detection and city planning. When digital orthophoto images are used with GIS, they provide quick decision-control mechanisms and quick data collection, and they help to find efficient solutions in a short time in planning applications.
NASA Astrophysics Data System (ADS)
Zhang, Xunxun; Xu, Hongke; Fang, Jianwu
2018-01-01
Along with the rapid development of unmanned aerial vehicle technology, multiple vehicle tracking (MVT) in aerial video sequences has received widespread interest for providing required traffic information. Due to camera motion and complex backgrounds, MVT in aerial video sequences poses unique challenges. We propose an efficient MVT algorithm via a driver behavior-based Kalman filter (DBKF) and an improved deterministic data association (IDDA) method. First, a hierarchical image registration method is put forward to compensate for the camera motion. Afterward, to improve the accuracy of the state estimation, we propose the DBKF module by incorporating driver behavior into the Kalman filter, where an artificial potential field is introduced to reflect the driver behavior. Then, to implement the data association, a local optimization method is designed instead of global optimization. By introducing an adaptive operating strategy, the proposed IDDA method can also deal with situations in which vehicles suddenly appear or disappear. Finally, comprehensive experiments on the DARPA VIVID data set and KIT AIS data set demonstrate that the proposed algorithm generates satisfactory and superior results.
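The Kalman-filter backbone that the DBKF module extends can be illustrated with a minimal 1-D constant-velocity filter. This sketch omits the driver-behavior / artificial-potential-field term that distinguishes DBKF, and all parameter values are illustrative:

```python
def kf_step(x, P, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    x = (position, velocity); P = 2x2 covariance [[p00,p01],[p10,p11]];
    z = measured position; q, r = process/measurement noise variances."""
    # Predict with F = [[1, dt], [0, 1]]: x' = F x, P' = F P F^T + Q
    px, vx = x[0] + dt * x[1], x[1]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with position measurement z (H = [1, 0])
    s = p00 + r                      # innovation covariance
    k0, k1 = p00 / s, p10 / s        # Kalman gain
    y = z - px                       # innovation
    x_new = (px + k0 * y, vx + k1 * y)
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new
```

Fed consistent position measurements of a vehicle moving at constant speed, the estimate converges to the true position and velocity; DBKF adds a behavioral term on top of this prediction step.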
NASA Astrophysics Data System (ADS)
Young, Andrew; Marshall, Stephen; Gray, Alison
2016-05-01
The use of aerial hyperspectral imagery for the purpose of remote sensing is a rapidly growing research area. Currently, targets are generally detected by looking for distinct spectral features of the objects under surveillance. For example, a camouflaged vehicle, deliberately designed to blend into background trees and grass in the visible spectrum, can be revealed using spectral features in the near-infrared spectrum. This work aims to develop improved target detection methods using a two-stage approach: firstly, development of a physics-based atmospheric correction algorithm to convert radiance into reflectance hyperspectral image data; and secondly, use of improved outlier detection techniques. In this paper the use of the Percentage Occupancy Hit or Miss Transform is explored to provide an automated method for target detection in aerial hyperspectral imagery.
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
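As a rough illustration of how counted line primitives become a fixed-length scene descriptor, here is a toy length-weighted orientation histogram over extracted line segments. This is an assumption-laden sketch, not the authors' BoL implementation:

```python
import math

def line_orientation_histogram(segments, bins=8):
    """Bin line segments by orientation (0-180 deg), weighting each by
    its length -- one simple way to turn extracted line primitives
    into a fixed-length, normalized scene descriptor."""
    hist = [0.0] * bins
    for (x1, y1, x2, y2) in segments:
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        length = math.hypot(x2 - x1, y2 - y1)
        hist[min(int(angle / 180.0 * bins), bins - 1)] += length
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

A regular street grid would concentrate mass in two orthogonal bins, while unstructured terrain spreads mass across all bins, which is the kind of signal a BoL-style descriptor exploits.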
Open source software and low cost sensors for teaching UAV science
NASA Astrophysics Data System (ADS)
Kefauver, S. C.; Sanchez-Bragado, R.; El-Haddad, G.; Araus, J. L.
2016-12-01
Drones, also known as UASs (unmanned aerial systems), UAVs (unmanned aerial vehicles) or RPAS (remotely piloted aircraft systems), are both useful advanced scientific platforms and recreational toys that appeal to younger generations. As such, they can make for excellent education tools as well as low-cost scientific research project alternatives. However, the journey from taking pretty pictures to remote sensing science can be daunting if one is presented with only expensive software and sensor options. A number of open-source tools and low-cost platform and sensor options are available that can deliver excellent scientific research results and, by often requiring more user involvement than commercial software and sensors, provide even greater educational benefits. Scale-invariant feature transform (SIFT) algorithm implementations include the Microsoft Image Composite Editor (ICE), which can create quality 2D image mosaics with some motion and terrain adjustments, and VisualSFM (Structure from Motion), which provides full image mosaicking with movement and orthorectification capacities. RGB image quantification using alternate color space transforms, such as the BreedPix indices, can be calculated via plugins in the open-source software Fiji (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). Recent analyses of aerial images from UAVs over different vegetation types and environments have shown that RGB metrics can outperform more costly commercial sensors. Specifically, Hue-based pixel counts, the Triangle Greenness Index (TGI), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating abiotic and biotic stress impacts on crop health. Also, simple kits are available for NDVI camera conversions.
Furthermore, multivariate analyses of the different RGB indices in the R program for statistical computing, such as classification and regression trees, can make the interpretation of results more approachable in the classroom.
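The RGB indices named above are simple arithmetic on band values. A minimal sketch using the commonly cited simplified TGI formulation (TGI ≈ G − 0.39·R − 0.61·B) and NGRDI = (G − R)/(G + R); the exact BreedPix plugin definitions may differ:

```python
def ngrdi(r, g, b):
    """Normalized Green Red Difference Index from mean band values;
    positive for green vegetation, negative for red/brown soil."""
    return (g - r) / (g + r) if (g + r) else 0.0

def tgi(r, g, b):
    """Triangle Greenness Index, simplified band-centre approximation
    TGI = G - 0.39*R - 0.61*B (a common published formulation)."""
    return g - 0.39 * r - 0.61 * b
```

Applied to per-plot mean pixel values, these indices give the kind of low-cost greenness metrics that the text reports outperforming NDVI for some stress assessments.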
NASA Astrophysics Data System (ADS)
Leon-Perez, M.; Hernandez, W. J.; Armstrong, R.
2016-02-01
Reported cases of seagrass loss have increased over the last 40 years, raising awareness of the need for assessing seagrass health. In situ monitoring has been the main method for assessing spatial and temporal changes in seagrass ecosystems. Although remote sensing techniques with multispectral imagery have recently been used for these purposes, long-term analysis is limited to the sensor's mission life. The objective of this project is to determine long-term changes in seagrass habitat cover at Caja de Muertos Island Nature Reserve by combining in situ data with a satellite image and historical aerial photography. A recent satellite image from the WorldView-2 sensor was used to generate a 2014 benthic habitat map for the study area. The multispectral image was pre-processed using conversion of digital numbers to radiance, and atmospheric and water column corrections. Object-based image analysis was used to segment the image into polygons representing different benthic habitats and to classify those habitats according to the classification scheme developed for this project. The scheme includes the following benthic habitat categories: seagrass (sparse, dense, and very dense), colonized hard bottom (sparse, dense, and very dense), sand, and mixed algae on unconsolidated sediments. Field work was used to calibrate the satellite-derived benthic maps and to assess the accuracy of the final products. In addition, a time series of satellite imagery and historic aerial photography from 1950 to 2014 provided data to assess long-term changes in seagrass habitat cover within the Reserve. Preliminary results show an increase in seagrass habitat cover, contrasting with the worldwide declining trend. The results of this study will provide valuable information for the conservation and management of seagrass habitat in the Caja de Muertos Island Nature Reserve.
Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-01-01
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767
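Dehazing algorithms of this family typically invert the atmospheric scattering model I = J·t + A·(1 − t) once a transmission map t has been estimated. A minimal per-channel sketch of that inversion step only (not the paper's wavelength-adaptive model; the clamp t0 is a common heuristic to avoid noise amplification):

```python
def dehaze_pixel(i, a, t, t0=0.1):
    """Recover scene radiance J from observed intensity i, airlight a,
    and transmission t by inverting I = J*t + A*(1 - t).
    t is clamped to t0, and the result to the valid 8-bit range."""
    t = max(t, t0)
    j = (i - a) / t + a
    return min(max(j, 0.0), 255.0)
```

Running this per channel with a wavelength-dependent transmission map, as the paper proposes, lets shorter wavelengths (more strongly scattered) be corrected more aggressively than longer ones.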
On a Fundamental Evaluation of a Uav Equipped with a Multichannel Laser Scanner
NASA Astrophysics Data System (ADS)
Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.
2018-05-01
Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-View Stereo. However, it remains difficult to obtain key points from surfaces with limited texture, such as new asphalt or concrete, or from areas such as forests that may be concealed by vegetation. A promising method for conducting aerial surveys is the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner mounted on a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.
Use of micro unmanned aerial vehicles for roadside condition assessment
DOT National Transportation Integrated Search
2010-12-01
Micro unmanned aerial vehicles (MUAVs) that are equipped with digital imaging systems and global : positioning systems provide a potential opportunity for improving the effectiveness and safety of roadside : condition and inventory surveys. This stud...
NASA Astrophysics Data System (ADS)
Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter
This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the telemetry data of the aerial robot, combined with live video images from an onboard camera, to register local video images against a priori registered orthophotos. This yields a precise, drift-free absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.
Detection of Tree Crowns Based on Reclassification Using Aerial Images and LIDAR Data
NASA Astrophysics Data System (ADS)
Talebi, S.; Zarea, A.; Sadeghian, S.; Arefi, H.
2013-09-01
Tree detection using aerial sensors has been a focus of many researchers in recent decades in fields including remote sensing and photogrammetry. This paper is intended to detect trees in complex city areas using aerial imagery and laser scanning data. Our methodology is a hierarchical unsupervised method consisting of several primitive operations. It is divided into three sections: the first uses aerial imagery, while the second and third use laser scanner data. In the first section, a vegetation cover mask is created for both sunny and shadowed areas. In the second section, the Rate of Slope Change (RSC) is used to eliminate grass. In the third section, a Digital Terrain Model (DTM) is obtained from the LiDAR data; from the DTM and the Digital Surface Model (DSM) we derive a Normalized Digital Surface Model (nDSM), and objects lower than a specific height are eliminated. The three sections produce three result layers, which are multiplied together to obtain the final result layer, which is then smoothed by morphological operations. The result layer was submitted to ISPRS WG III/4 for evaluation. The evaluation shows that our method ranks well against the other participants' methods when assessed in terms of five indices: area-based completeness, area-based correctness, object-based completeness, object-based correctness, and boundary RMS. Being unsupervised and automatic, this method can be improved further and integrated with other methods to obtain the best results.
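The nDSM step in the third section is essentially a per-cell subtraction and height threshold. A minimal sketch, with rasters represented as nested lists and a 2 m cutoff chosen purely for illustration (the paper does not state its threshold here):

```python
def ndsm_mask(dsm, dtm, min_height=2.0):
    """Normalized DSM: per-cell height above terrain (DSM - DTM).
    Returns a boolean mask keeping cells taller than min_height,
    which removes grass and other low vegetation."""
    return [[(s - t) > min_height for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]
```

Multiplying this mask with the vegetation mask and the RSC layer, as the abstract describes, leaves only tall vegetated objects, i.e. tree candidates.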
NASA Astrophysics Data System (ADS)
Cogliati, M.; Tonelli, E.; Battaglia, D.; Scaioni, M.
2017-12-01
Archive aerial photos represent a valuable heritage, providing information about land content and topography in past years. Today, the availability of low-cost and open-source solutions for photogrammetric processing of close-range and drone images offers the chance to produce outputs such as DEMs and orthoimages in an easy way. This paper aims to demonstrate how, and to what level of accuracy, digitized archive aerial photos may be used within such low-cost software (Agisoft Photoscan Professional®) to generate photogrammetric outputs. Different steps of the photogrammetric processing workflow are presented and discussed. The main conclusion is that this procedure can provide final products, although they do not feature the high accuracy and resolution obtainable with high-end photogrammetric software packages specifically designed for aerial survey projects. In the last part, a case study is presented on the use of a four-epoch archive of aerial images to analyze an area where a tunnel is to be excavated.
Estimating Slopes In Images Of Terrain By Use Of BRDF
NASA Technical Reports Server (NTRS)
Scholl, Marija S.
1995-01-01
Proposed method of estimating slopes of terrain features based on use of the bidirectional reflectivity distribution function (BRDF) in analyzing aerial photographs, satellite video images, or other images produced by remote sensors. Estimated slopes are integrated along horizontal coordinates to obtain estimated heights, generating three-dimensional terrain maps. Method requires neither coregistration of terrain features in pairs of images acquired from slightly different perspectives, nor a Sun or other source of illumination low in the sky over the terrain of interest. On the contrary, it works best when the Sun is high, and at almost all combinations of illumination and viewing angles.
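The integration of slopes into heights can be sketched as a cumulative sum along a horizontal profile. This illustrates only the integration step, not the BRDF-based slope estimation itself:

```python
def heights_from_slopes(slopes, dx=1.0, h0=0.0):
    """Integrate estimated slopes (dz/dx) along a horizontal profile to
    recover relative terrain heights: a simple cumulative sum, i.e. a
    first-order numerical integration with step dx and datum h0."""
    heights = [h0]
    for s in slopes:
        heights.append(heights[-1] + s * dx)
    return heights
```

Repeating this along every row (and regularizing across rows) turns a slope field into the three-dimensional terrain map the method targets.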
NASA Astrophysics Data System (ADS)
Harrington, J.; Peltzer, G.; Leprince, S.; Ayoub, F.; Kasser, M.
2011-12-01
We present new measurements of the surface deformation associated with the rifting event of 1978 in the Asal-Ghoubbet rift, Republic of Djibouti. The Asal-Ghoubbet rift forms a component of the Afar Depression, a broad extensional region at the junction between the Nubia, Arabia, and Somalia plates, which, apart from Iceland, is the only spreading center located above sea level. The 1978 rifting event was marked by a 2-month sequence of small to moderate earthquakes (Mb ~3-5) and a fissural eruption of the Ardukoba Volcano. Deformation in the Asal rift associated with the event included the reactivation of the main bordering faults and the development of numerous open fissures on the rift floor. The movement of the rift shoulders, measured using ground-based geodesy, showed up to 2.5 m of opening in the N40E direction. Our data include historical aerial photographs from 1962 and 1984 (less than 0.8 m/pixel) along the northern border fault, three KH-9 Hexagon (~8 m/pixel) satellite images from 1973, and recently acquired ASTER (15 m/pixel) and SPOT5 (2.5 m/pixel) data. The measurements are made by correlating pre- and post-event images using the COSI-Corr (Co-registration of Optically Sensed Images and Correlation) software developed at Caltech. The ortho-rectification of the images is done with a mosaic of a 10 m resolution digital elevation model produced by the French Institut Geographique National (IGN), and the SRTM and GDEM datasets. Correlation results from the satellite images indicate 2-3 meters of opening across the rift. Preliminary results obtained using the 1962 and 1984 aerial photographs indicate that a large fraction of the opening occurred on or near Fault γ, which borders the rift to the north. These preliminary results are largely consistent with the ground-based measurements made after the event. A complete analysis of the aerial photograph coverage will provide a better characterization of the spatial distribution of the deformation throughout the rift.
Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong
2018-01-01
Detection and monitoring are the first essential steps for effective management of sheath blight (ShB), a major rice disease worldwide. Unmanned aerial systems have high potential to improve this detection process, since they can reduce the time needed to scout for the disease at a field scale and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery of research plots with 67 rice cultivars and elite lines. The collected imagery was then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of the images, it was found that the color transformation could qualitatively detect the infected areas of ShB in the field plots, but was less effective at detecting different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truths of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. The relationship analyses indicate a strong correlation between ground-measured NDVIs and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Image-based NDVIs extracted from multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool for detecting the ShB disease at a field scale.
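The NDVI extraction and correlation analysis described above reduce to short formulas. An illustrative sketch, assuming per-plot mean band reflectances and an R2 computed for a least-squares line (not the authors' actual processing chain):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one plot/pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line y ~ x,
    i.e. the squared Pearson correlation of the two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy) if sxx and syy else 0.0
```

Regressing image-extracted NDVIs against GreenSeeker NDVIs or against scored disease severity, as the study does, yields exactly this kind of R2 statistic.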
Observation of coral reefs on Ishigaki Island, Japan, using Landsat TM images and aerial photographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsunaga, Tsuneo; Kayanne, Hajime
1997-06-01
Ishigaki Island is located at the southwestern end of the Japanese Islands and is famous for its fringing coral reefs. More than twenty LANDSAT TM images spanning twelve years, and aerial photographs taken in 1977 and 1994, were used to survey two shallow reefs on this island, Shiraho and Kabira. Intensive field surveys were also conducted in 1995. All satellite images of Shiraho were geometrically corrected and overlaid to construct a multi-date satellite data set. The effects of solar elevation and tide on satellite imagery were studied with this data set. The comparison of aerial and satellite images indicated that significant changes occurred between 1977 and 1984 in Kabira: rapid formation of dark patches in the western part and their decrease in the eastern part. The field surveys revealed that the newly formed dark patches in the west contain young corals. These results suggest that remote sensing is useful not only for mapping but also for monitoring shallow coral reefs.
NASA Astrophysics Data System (ADS)
Roth, Lukas; Aasen, Helge; Walter, Achim; Liebisch, Frank
2018-07-01
Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI based on RGB images taken by an unmanned aerial system (UAS) is introduced. Soybean was taken as the model crop of investigation. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped to develop and verify the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The measured LAI prediction accuracy was comparable with that of a gap fraction-based handheld device (R2 of 0.92, RMSE of 0.42 m2 m-2) and correlated well with destructive LAI measurements (R2 of 0.89, RMSE of 0.41 m2 m-2). These results indicate that, within the range (LAI ≤ 3) for which the method was tested, extracting LAI from UAS-derived RGB images using viewing geometry information represents a valid alternative to destructive and optical handheld-device LAI measurements in soybean. Thereby, we open the door for automated, high-throughput assessment of LAI in plant and crop science.
Automated Snow Extent Mapping Based on Orthophoto Images from Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Spallek, Waldemar; Witek-Kasprzak, Matylda
2018-04-01
The paper presents the application of the k-means clustering in the process of automated snow extent mapping using orthophoto images generated using the Structure-from-Motion (SfM) algorithm from oblique aerial photographs taken by unmanned aerial vehicle (UAV). A simple classification approach has been implemented to discriminate between snow-free and snow-covered terrain. The procedure uses the k-means clustering and classifies orthophoto images based on the three-dimensional space of red-green-blue (RGB) or near-infrared-red-green (NIRRG) or near-infrared-green-blue (NIRGB) bands. To test the method, several field experiments have been carried out, both in situations when snow cover was continuous and when it was patchy. The experiments have been conducted using three fixed-wing UAVs (swinglet CAM by senseFly, eBee by senseFly, and Birdie by FlyTech UAV) on 10/04/2015, 23/03/2016, and 16/03/2017 within three test sites in the Izerskie Mountains in southwestern Poland. The resulting snow extent maps, produced automatically using the classification method, have been validated against real snow extents delineated through a visual analysis and interpretation offered by human analysts. For the simplest classification setup, which assumes two classes in the k-means clustering, the extent of snow patches was estimated accurately, with areal underestimation of 4.6% (RGB) and overestimation of 5.5% (NIRGB). For continuous snow cover with sparse discontinuities at places where trees or bushes protruded from snow, the agreement between automatically produced snow extent maps and observations was better, i.e. 1.5% (underestimation with RGB) and 0.7-0.9% (overestimation, either with RGB or with NIRRG). Shadows on snow were found to be mainly responsible for the misclassification.
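The k-means classification over three-band pixel values can be sketched as follows. This toy version initialises centres by brightness spread rather than randomly, purely so the example is deterministic; it is not the authors' implementation, and real orthophoto rasters would be processed as arrays rather than lists:

```python
def kmeans(pixels, k=2, iters=20):
    """Plain k-means on 3-band pixel tuples (RGB, NIRRG or NIRGB);
    returns (centres, labels). Acts as a stand-in for the
    snow / snow-free discrimination step."""
    by_brightness = sorted(pixels, key=sum)
    step = (len(by_brightness) - 1) // (k - 1) if k > 1 else 0
    centres = [by_brightness[i * step] for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assign each pixel to its nearest centre (squared distance)
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(px, centres[c])))
                  for px in pixels]
        # move each centre to the mean of its assigned pixels
        for c in range(k):
            members = [px for px, lab in zip(pixels, labels) if lab == c]
            if members:
                centres[c] = tuple(sum(ch) / len(members)
                                   for ch in zip(*members))
    return centres, labels
```

With k = 2, bright snow pixels and darker snow-free terrain separate into the two clusters, mirroring the simplest classification setup described in the abstract.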
Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area
NASA Astrophysics Data System (ADS)
Gini, R.; Passoni, D.; Pinto, L.; Sona, G.
2012-07-01
The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to improve its usability by employing audiovisual movable systems of 3D reconstruction, and to improve monitoring procedures by using new media to integrate the fruition phase with the preservation one. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields, and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Tests were then performed to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed through GPS to allow accuracy analysis. Aerial triangulations (ATs) were carried out with photogrammetric commercial software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to identify the pros and cons of each package in managing non-conventional aerial imagery, as well as the differences in the modeling approach. Further analyses were performed on the differences between the EO parameters and the corresponding data coming from the onboard UAV navigation system.
Aerial LED signage by use of crossed-mirror array
NASA Astrophysics Data System (ADS)
Yamamoto, Hirotsugu; Kujime, Ryousuke; Bando, Hiroki; Suyama, Shiro
2013-03-01
3D representation of digital signage improves its significance and the rapid notification of important points. Real 3D display techniques such as volumetric 3D displays are effective for public signs because they provide not only binocular disparity but also motion parallax and other cues, which give a 3D impression even to people with abnormal binocular vision. Our goal is to realize aerial 3D LED signs. We have specially designed and fabricated a reflective optical device to form an aerial image of LEDs with a wide field angle. The developed reflective optical device is composed of a crossed-mirror array (CMA). The CMA contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge into the corresponding image point. The depth between LED lamps is reproduced as the same depth in the floating 3D image. The floating image of the LEDs was formed over a wide range of incident angles, with peak reflectance at 35 deg. The image size of the focused beam (point spread function) agreed with the apparent aperture size.
NASA Astrophysics Data System (ADS)
Liu, Tao; Abd-Elrahman, Amr
2018-05-01
Deep convolutional neural networks (DCNNs) require massive training datasets to trigger their image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply combined with traditional object-based image analysis (OBIA) for classification of unmanned aerial system (UAS) orthoimages, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNNs by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. Ten-fold cross-validation results show the mean overall classification accuracy increasing substantially, from the 65.32% achieved when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the accuracy advantage of the DCNN over traditional classifiers is more obvious within the proposed multi-view OBIA framework than within the traditional OBIA framework.
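The voting procedure that fuses per-view predictions into one label per object can be as simple as a majority vote. An illustrative sketch; the tie-breaking rule here (earliest view wins) is an assumption, not taken from the paper:

```python
from collections import Counter

def vote(per_view_labels):
    """Fuse class predictions for one object across multiple views by
    majority vote; ties are broken by the earliest view's prediction."""
    counts = Counter(per_view_labels)
    best = max(counts.values())
    for label in per_view_labels:   # first view reaching the max wins
        if counts[label] == best:
            return label
```

Because each object appears in several oblique views, a single misclassified view is outvoted, which is the intuition behind MODe's accuracy gain over classifying the orthoimage alone.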
1. NORTHWEST OBLIQUE AERIAL VIEW OF FORT DELAWARE AND PEA ...
1. NORTHWEST OBLIQUE AERIAL VIEW OF FORT DELAWARE AND PEA PATCH ISLAND. REMAINS OF SEA WALL VISIBLE IN FOREGROUND AND RIGHT OF IMAGE. - Fort Delaware, Sea Wall, Pea Patch Island, Delaware City, New Castle County, DE
Unmanned aerial vehicles (UAVs) for surveying marine fauna: a dugong case study.
Hodgson, Amanda; Kelly, Natalie; Peel, David
2013-01-01
Aerial surveys of marine mammals are routinely conducted to assess and monitor species' habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects covering a 1.3 km(2) area frequently used by dugongs, were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort Sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as 'certain' (unmistakably dugongs). Neither our dugong sighting rate, nor our ability to identify dugongs with certainty, were affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys.
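The scheduled capture rate for a prescribed forward overlap follows from simple footprint geometry: the along-track ground footprint, reduced by the overlap fraction, gives the trigger spacing, and dividing by ground speed gives the trigger interval. A rough sketch of that calculation; all camera and flight parameters are hypothetical, as the abstract does not report them.

```python
def capture_interval(altitude_m, focal_len_mm, sensor_len_mm,
                     ground_speed_ms, overlap=0.10):
    """Seconds between shutter triggers to achieve a given forward
    overlap between consecutive images along a transect.

    Ground footprint along-track:  L = altitude * sensor_len / focal_len
    Distance between exposures:    d = L * (1 - overlap)
    """
    footprint = altitude_m * sensor_len_mm / focal_len_mm
    spacing = footprint * (1.0 - overlap)
    return spacing / ground_speed_ms

# e.g. ~1000 ft (305 m) altitude, 50 mm lens, 24 mm sensor, 25 m/s cruise:
print(round(capture_interval(305, 50, 24, 25), 2))  # 5.27
```

A higher overlap fraction shortens the interval, which is why the modest 10% overlap keeps the image count (6243 here) manageable while still duplicating the image corners where glitter masks animals.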
NASA Astrophysics Data System (ADS)
Cai, Z.; Liu, W.; Luo, G.; Xiang, Z.
2018-04-01
The key technologies in real-scene 3D modeling from oblique photography include oblique photographic data acquisition, layout and surveying of photo control points, oblique camera calibration, aerial triangulation, dense matching of multi-angle images, building of the triangulated irregular network (TIN), TIN simplification, and automatic texture mapping. Among these, aerial triangulation is the core, and its results directly affect the quality of the final model and the accuracy of the corresponding data. From this point of view, this paper studies practical aerial triangulation technologies for real-scene 3D modeling with oblique photography and finally proposes a technical method of aerial triangulation for oblique photography that can be put into practice.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for this aerial camera, since it always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is better suited for viewing and analysis by human observers. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
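One plausible shape for such a control loop, not the paper's actual algorithm: scale the shutter toward a brightness target, clamp it at a motion-blur limit imposed by the platform speed, and make up the remaining exposure with sensor gain. All constants below are hypothetical.

```python
import math

def auto_exposure(mean_brightness, target=128,
                  shutter_s=1/1000, gain_db=0.0,
                  max_shutter_s=1/500, max_gain_db=24.0):
    """One step of a simple AEC/AGC loop (illustrative sketch only).

    Scales the shutter so the mean image brightness moves toward
    `target`; if the required shutter exceeds the motion-blur limit
    for a fast-moving platform, the shutter is clamped and the
    shortfall is converted to sensor gain in dB, up to a ceiling.
    """
    ratio = target / max(mean_brightness, 1)
    shutter = shutter_s * ratio
    if shutter > max_shutter_s:              # blur limit reached
        leftover = shutter / max_shutter_s   # remaining exposure factor
        shutter = max_shutter_s
        gain_db = min(gain_db + 20 * math.log10(leftover), max_gain_db)
    return shutter, gain_db

# A dark scene (mean 32 of 255) wants 4x exposure; the blur limit
# allows only 2x via shutter, so the rest becomes ~6 dB of gain:
print(auto_exposure(32))
```

Splitting the correction this way mirrors the trade-off the abstract describes: shutter time is bounded by forward motion, so gain (at the cost of noise) absorbs what the shutter cannot.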
NASA Astrophysics Data System (ADS)
Silva, T. S. F.; Torres, R. S.; Morellato, P.
2017-12-01
Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and is highly susceptible to climatic change. Phenological knowledge in the tropics is limited by a lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations, but offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capability for spatial coverage, but challenges remain in producing color-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging to monitor phenology in tropical altitudinal grasslands and forests, answering: 1) Can very-high-resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by sensor physical limitations? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10am and 4pm, at 120-150m a.g.l., yielding 5-10cm spatial resolution. To compensate for illumination changes caused by time of day, season and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal. These variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric difference from vegetative structures allowed detection of reproductive timing in the HSV color space, despite illumination effects.
We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.
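The HSV-based detection of conspicuous reproductive structures described above can be sketched as a simple hue/saturation threshold; the specific thresholds and pixel colors below are hypothetical, not taken from the study.

```python
import colorsys

def is_reproductive(r, g, b, hue_lo=0.9, hue_hi=0.1, sat_min=0.4):
    """Flag a pixel as a conspicuous reproductive structure (e.g. a
    red flower) by thresholding in HSV space, where hue is more
    robust to illumination changes than raw RGB. Thresholds are
    hypothetical; hue wraps around 1.0, so 'reddish' spans the seam.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    reddish = h >= hue_lo or h <= hue_hi
    return reddish and s >= sat_min

print(is_reproductive(200, 40, 50))   # True  (red flower pixel)
print(is_reproductive(60, 140, 60))   # False (green leaf pixel)
```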
NASA Astrophysics Data System (ADS)
Dafflon, B.; Leger, E.; Peterson, J.; Falco, N.; Wainwright, H. M.; Wu, Y.; Tran, A. P.; Brodie, E.; Williams, K. H.; Versteeg, R.; Hubbard, S. S.
2017-12-01
Improving understanding and modelling of terrestrial systems requires advances in measuring and quantifying interactions among subsurface, land surface and vegetation processes over relevant spatiotemporal scales. Such advances are important to quantify natural and managed ecosystem behaviors, as well as to predict how watershed systems respond to increasingly frequent hydrological perturbations, such as droughts, floods and early snowmelt. Our study focuses on the joint use of UAV-based multi-spectral aerial imaging, ground-based geophysical tomographic monitoring (incl. electrical and electromagnetic imaging) and point-scale sensing (soil moisture sensors and soil sampling) to quantify interactions between above- and below-ground compartments of the East River Watershed in the Upper Colorado River Basin. We evaluate linkages between physical properties (incl. soil composition, soil electrical conductivity, soil water content), metrics extracted from digital surface and terrain elevation models (incl. slope, wetness index) and vegetation properties (incl. greenness, plant type) in a 500 x 500 m hillslope-floodplain subsystem of the watershed. Data integration and analysis are supported by numerical approaches that simulate the control of soil and geomorphic characteristics on hydrological processes. Results provide an unprecedented window into critical zone interactions, revealing significant below- and above-ground co-dynamics. Baseline geophysical datasets provide the lithological structure along the hillslope, which includes a surface soil horizon underlain by a saprolite layer and the fractured Mancos shale. Time-lapse geophysical data show very different moisture dynamics in various compartments and locations during the winter and growing season. Integration with aerial imaging reveals a significant linkage between plant growth and subsurface wetness, soil characteristics and the topographic gradient.
The obtained information about the organization and connectivity of the landscape is being transferred to larger regions using aerial imaging and will be used to constrain multi-scale, multi-physics hydro-biogeochemical simulations of the East River watershed response to hydrological perturbations.
NASA Astrophysics Data System (ADS)
Shibuya, Masato; Takada, Akira; Nakashima, Toshiharu
2016-04-01
In optical lithography, high-performance exposure tools are indispensable for obtaining not only fine patterns but also precise pattern widths. Since an accurate theoretical method is necessary to predict these values, several pioneering and valuable studies have been proposed. However, there remains some ambiguity or lack of consensus regarding the treatment of diffraction by the object, the incoming inclination factor onto the image plane in scalar imaging theory, and the paradoxical phenomenon of an inclined entrance plane wave onto the image in vector imaging theory. We have reconsidered imaging theory in detail and phenomenologically resolved the paradox. By comparing the theoretical aerial image intensity with experimental pattern widths for a one-dimensional pattern, we have validated our theoretical considerations.
Mobile Aerial Tracking and Imaging System (MATrIS) for Aeronautical Research
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Blanchard, Robert C.; Miller, Geoffrey M.
2004-01-01
A mobile, rapidly deployable ground-based system to track and image targets of aeronautical interest has been developed. Targets include reentering reusable launch vehicles as well as atmospheric and transatmospheric vehicles. The optics were designed to image targets in the visible and infrared wavelengths. To minimize acquisition cost and development time, the system uses commercially available hardware and software where possible. The conception and initial funding of this system originated with a study of ground-based imaging of global aerothermal characteristics of reusable launch vehicle configurations. During that study the National Aeronautics and Space Administration teamed with the Missile Defense Agency/Innovative Science and Technology Experimentation Facility to test techniques and analysis on two Space Shuttle flights.
Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.
2016-01-01
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196
Automated training site selection for large-area remote-sensing image analysis
NASA Astrophysics Data System (ADS)
McCaffrey, Thomas M.; Franklin, Steven E.
1993-11-01
A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classification by minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by a purely statistical selection of homogeneous sites, which can then be compared with field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites used to train the classification decision rules. The selection of homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and Student's t-statistic. Comparisons of site means are conducted against a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.
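The coefficient-of-variation screen described above can be sketched in a few lines; the threshold value is hypothetical, and the program's companion tests (F-statistic, Student's t-statistic) are omitted here.

```python
import statistics

def is_homogeneous(window, cv_threshold=0.05):
    """Screen a window of pixel values as a candidate training site:
    accept only if the coefficient of variation (stdev / mean) falls
    below a threshold, echoing the purely statistical pre-selection
    described above. The 0.05 threshold is a made-up example value.
    """
    m = statistics.mean(window)
    if m == 0:
        return False
    cv = statistics.pstdev(window) / m
    return cv < cv_threshold

print(is_homogeneous([100, 101, 99, 100]))  # True  (uniform patch)
print(is_homogeneous([100, 150, 60, 90]))   # False (mixed patch)
```

Sites passing this cheap screen would then face the F- and t-tests against previously accepted sites before being offered to the analyst.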
Overall design of imaging spectrometer on-board light aircraft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhongqi, H.; Zhengkui, C.; Changhua, C.
1996-11-01
Aerial remote sensing is the earliest remote sensing technical system and has developed rapidly in recent years. The development of aerial remote sensing was dominated by high-to-medium-altitude platforms in the past; now it is characterized by a diversity of platforms, including planes of high, medium, and low flying altitude, helicopters, airships, remotely controlled airplanes, gliders, and balloons. The most widely used and rapidly developing platform at present is the light aircraft. As early as the late 1970s, the Beijing Research Institute of Uranium Geology began aerial photography and geophysical surveys using light aircraft, and put forward the overall design scheme of a light aircraft imaging spectral application system (LAISAS) in the 1990s. LAISAS comprises four subsystems: the measuring platform, the data acquisition subsystem, the ground testing subsystem, and the data processing subsystem. The principal instruments of LAISAS include a measuring platform controlled by an inertial gyroscope, an aerial spectrometer with high spectral resolution, an imaging spectrometer, a 3-channel scanner, a 128-channel imaging spectrometer, GPS, an illuminance meter, and devices for atmospheric parameter measurement, ground testing, and data correction and processing. LAISAS features integrity, covering data acquisition through data processing to application; stability, which guarantees image quality and rests on the measuring platform, ground testing devices, and in-door data correction system; exemplary integration of GIS, GPS, and image processing technology; and practicality, embodied in its flexibility and high performance-to-cost ratio. It can therefore be used in fundamental remote sensing research and in large-scale mapping for resource exploration, environmental monitoring, disaster prediction, and military purposes.
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching
Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng
2017-01-01
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547
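For context, the generic single-channel dynamic-programming seam search that this work builds on can be sketched as follows; the paper's actual contribution, the stereo dual-channel energy accumulation, is not reproduced here. The cost grid would come from pixel differences in the overlap region of two registered images.

```python
def min_cost_seam(cost):
    """Find a top-to-bottom seam of minimal accumulated cost through a
    2-D cost grid. Each row may move at most one column left or right
    of the previous row's seam position (the classical 3-neighbor DP).
    Returns the column index of the seam in each row.
    """
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]          # accumulated cost table
    for r in range(1, rows):
        for c in range(cols):
            acc[r][c] += min(acc[r-1][max(c-1, 0):min(c+2, cols)])
    # backtrack from the cheapest bottom cell
    seam = [min(range(cols), key=lambda c: acc[rows-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        cands = range(max(c-1, 0), min(c+2, cols))
        seam.append(min(cands, key=lambda c2: acc[r][c2]))
    return seam[::-1]

cost = [[5, 1, 9],
        [9, 1, 9],
        [9, 9, 1]]
print(min_cost_seam(cost))  # [1, 1, 2]
```

The stitch then copies pixels from one image on each side of the seam, so a low-cost seam passes through regions where the two images already agree.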
NASA Astrophysics Data System (ADS)
Qayyum, Abdul; Saad, Naufal M.; Kamel, Nidal; Malik, Aamir Saeed
2018-01-01
The monitoring of vegetation near high-voltage transmission power lines and poles is tedious. Blackouts present a huge challenge to power distribution companies and often occur due to tree growth in hilly and rural areas. Existing methods of monitoring hazardous overgrowth are numerous but expensive and time-consuming. Accurate estimation of tree and vegetation heights near power poles can prevent the disruption of power transmission in vulnerable zones. This paper presents a cost-effective approach based on a convolutional neural network (CNN) algorithm to compute the heights (depth maps) of objects proximal to power poles and transmission lines. The proposed CNN extracts and classifies features by feeding convolutional and pooling outputs into fully connected layers that capture prominent features from stereo image patches. Unmanned aerial vehicle or satellite stereo image datasets can thus provide a feasible and cost-effective means of identifying threat levels based on height and distance estimates of hazardous vegetation and other objects. Results were compared with existing disparity-map estimation techniques, such as graph cut, dynamic programming, belief propagation, and area-based methods. The proposed method achieved an accuracy rate of 90%.
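Converting a predicted disparity to metric depth uses the classical stereo triangulation relation Z = f·B/d; the CNN in the paper supplies the disparity map, while the relation itself is standard. The numbers below are purely illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classical stereo triangulation: depth Z = f * B / d, with the
    focal length f in pixels, baseline B in metres, and disparity d
    in pixels. Larger disparity means the object is closer.
    """
    return focal_px * baseline_m / disparity_px

# 1200 px focal length, 30 m baseline between aerial exposures,
# 120 px disparity for a treetop (hypothetical values):
print(depth_from_disparity(1200, 30.0, 120.0))  # 300.0
```

Subtracting such per-pixel depths for a treetop and the adjacent ground gives the vegetation height used for threat-level assessment.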
A low-cost drone based application for identifying and mapping of coastal fish nursery grounds
NASA Astrophysics Data System (ADS)
Ventura, Daniele; Bruno, Michele; Jona Lasinio, Giovanna; Belluscio, Andrea; Ardizzone, Giandomenico
2016-03-01
Acquiring seabed, landform or other topographic data plays a pivotal role in defining and mapping key marine habitats in the field of marine ecology. However, acquiring this kind of data at a high level of detail for very shallow and inaccessible marine habitats has often been challenging and time-consuming, and spatial and temporal coverage often has to be compromised to make the monitoring routine more cost-effective. Nowadays, emerging technologies can overcome many of these constraints. Here we describe a recent development in remote sensing based on a small unmanned aerial vehicle (UAV) that produces very fine-scale maps of fish nursery areas. This technology is simple to use, inexpensive, and timely in producing aerial photographs of marine areas. Technical details regarding aerial photo acquisition (drone and camera settings) and the post-processing workflow (3D model generation with a Structure from Motion algorithm and photo-stitching) are given. Finally, by applying modern algorithms of semi-automatic image analysis and classification (Maximum Likelihood, ECHO and Object-based Image Analysis), we compared the results of three thematic maps of a nursery area for juvenile sparid fishes, highlighting the potential of this method for mapping and monitoring coastal marine habitats.
NASA Astrophysics Data System (ADS)
Smaczyński, Maciej; Medyńska-Gulij, Beata
2017-06-01
Unmanned aerial vehicles are increasingly being used in close-range photogrammetry. Real-time observation of the Earth's surface and the photogrammetric images obtained are used as material for surveying and environmental inventory. The following study was conducted on a small area (approximately 1 ha). In such cases, the classical method of topographic mapping is not accurate enough, while the geodetic method of topographic surveying is an overly precise measurement technique for the purpose of inventorying natural environment components. The author of the following study proposes using unmanned aerial vehicle technology and tying the obtained images to a control point network established with the aid of GNSS technology. Georeferencing the acquired images and using them to create a photogrammetric model of the studied area enabled calculations that yielded a total root mean square error below 9 cm. A comparison of the real lengths of the vectors connecting the control points with their lengths calculated from the photogrammetric model fully confirmed the calculated RMSE and proved the usefulness of UAV technology in observing terrain components for the purpose of environmental inventory. Such environmental components include, among others, elements of road infrastructure and green areas, but also changes in the location of moving pedestrians and vehicles, as well as other changes in the natural environment that are not registered on classical base maps or topographic maps.
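The reported accuracy check, comparing vector lengths measured on the photogrammetric model against GNSS-surveyed reference lengths via a root mean square error, can be sketched as follows; the sample values are made up for illustration and are not the study's data.

```python
import math

def rmse(measured, reference):
    """Root mean square error between lengths measured on the
    photogrammetric model and their GNSS-surveyed reference lengths."""
    assert len(measured) == len(reference)
    sq = [(m - r) ** 2 for m, r in zip(measured, reference)]
    return math.sqrt(sum(sq) / len(sq))

gnss  = [12.40, 25.10, 8.75]   # metres, hypothetical survey values
model = [12.34, 25.18, 8.70]   # same vectors on the photo model
print(round(rmse(model, gnss), 3))  # 0.065  -- i.e. below the 9 cm level
```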
Photogrammetric 3d Building Reconstruction from Thermal Images
NASA Astrophysics Data System (ADS)
Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.
2017-08-01
This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial computer vision software package can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images or any camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information derived from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
Earth Surface Monitoring with COSI-Corr, Techniques and Applications
NASA Astrophysics Data System (ADS)
Leprince, S.; Ayoub, F.; Avouac, J.
2009-12-01
Co-registration of Optically Sensed Images and Correlation (COSI-Corr) is a software package developed at the California Institute of Technology (USA) for accurate geometrical processing of optical satellite and aerial imagery. Initially developed for the measurement of co-seismic ground deformation using optical imagery, COSI-Corr is now used for a wide range of applications in Earth Sciences, which take advantage of the software's capability to co-register, with very high accuracy, images taken from different sensors and acquired at different times. As long as a sensor is supported in COSI-Corr, all images between the supported sensors can be accurately orthorectified and co-registered. For example, it is possible to co-register a series of SPOT images, a series of aerial photographs, as well as to register a series of aerial photographs with a series of SPOT images, etc. Currently supported sensors include the SPOT 1-5, Quickbird, Worldview 1 and Formosat 2 satellites, the ASTER instrument, and frame camera acquisitions from, e.g., aerial surveys or declassified satellite imagery. Potential applications include accurate change detection between multi-temporal and multi-spectral images, and the calibration of pushbroom cameras. In particular, COSI-Corr provides a powerful correlation tool, which allows for accurate estimation of surface displacement. The accuracy depends on many factors (e.g., cloud, snow, and vegetation cover, shadows, temporal changes in general, steadiness of the imaging platform, defects of the imaging system, etc.), but in practice the standard deviation of the measurements obtained from the correlation of multi-temporal images is typically around 1/20 to 1/10 of the pixel size. The software package also includes post-processing tools such as denoising, destriping, and stacking tools to facilitate data interpretation. Examples drawn from current research in, e.g., seismotectonics, glaciology, and geomorphology will be presented.
COSI-Corr is developed in IDL (Interactive Data Language), integrated under the user friendly interface ENVI (Environment for Visualizing Images), and is distributed free of charge for academic research purposes.
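The displacement measurement COSI-Corr performs by image correlation can be illustrated in one dimension with a toy integer-shift search; the real software works in 2-D at subpixel accuracy via Fourier-based correlation, which this sketch does not attempt.

```python
def best_shift(ref, moved, max_shift=3):
    """Estimate the integer offset between two 1-D intensity profiles
    by maximising their cross-correlation over candidate shifts.
    A toy analogue of the 2-D subpixel correlation in COSI-Corr.
    """
    def score(s):
        pairs = [(ref[i], moved[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(moved)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

ref   = [0, 0, 5, 9, 5, 0, 0, 0]
moved = [0, 0, 0, 0, 5, 9, 5, 0]   # same feature shifted by +2 samples
print(best_shift(ref, moved))  # 2
```

Applied patch-by-patch across two co-registered images, such offsets form the displacement field from which co-seismic ground deformation is mapped.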
Oblique Aerial Photography Tool for Building Inspection and Damage Assessment
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.
2014-11-01
Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementary to traditional vertical views. Multi-camera aerial systems capture not only conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the fields of building inspection and disaster assessment is addressed. The main idea is to inspect a building from the four cardinal directions by using monoplotting functionalities. The developed application allows measuring building heights and distances and digitizing man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate heights of buildings and ground distances and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, the quality of available parameters (DEM, calibration and orientation values), user expertise, and measuring capability.
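A simplified version of the height estimate such a monoplotting tool produces is the pinhole relation H = (h_image / f) · D, ignoring image tilt and terrain relief, both of which a real monoplotting workflow must correct for. All numbers below are hypothetical.

```python
def object_height(pixels_tall, pixel_size_mm, focal_mm, distance_m):
    """Approximate real-world height of a facade feature from a single
    image via the pinhole relation H = (h_image / f) * D.
    This ignores camera tilt and relief displacement, so it is only a
    first-order sketch of what monoplotting computes rigorously.
    """
    h_image_mm = pixels_tall * pixel_size_mm
    return h_image_mm / focal_mm * distance_m

# A building 600 px tall on a sensor with 6 micron pixels, imaged
# through a 50 mm lens from 400 m away (hypothetical values):
print(round(object_height(600, 0.006, 50.0, 400.0), 1))  # 28.8
```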
Target detection method by airborne and spaceborne images fusion based on past images
NASA Astrophysics Data System (ADS)
Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng
2017-11-01
To address the problems that remote sensing target detection methods make low use of past remote sensing data of a target area and cannot recognize camouflaged targets accurately, a target detection method based on the fusion of airborne and spaceborne images with past imagery is proposed in this paper. Past spaceborne remote sensing images of the target area are taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne/spaceborne image registration, target change feature extraction, background noise suppression and artificial target feature extraction based on real-time aerial optical remote sensing images. Finally, a support vector machine is used to detect and recognize the target from the fused feature data. The experimental results establish that the proposed method combines the target-area change features of airborne and spaceborne remote sensing images with a target detection algorithm, and obtains good detection and recognition performance on camouflaged and non-camouflaged targets.
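The change-feature-extraction step described above can be sketched as differencing a registered past background image against the current image and thresholding the residual (a simplified illustration; the registration, noise suppression, and SVM stages of the paper are omitted, and the threshold rule is an assumption):

```python
import numpy as np

def change_feature(past, current, k=2.0):
    """Absolute-difference change map between a registered past background
    image and a current image, thresholded at mean + k*std of the residual."""
    diff = np.abs(current.astype(float) - past.astype(float))
    thresh = diff.mean() + k * diff.std()
    return diff > thresh        # boolean mask of candidate target pixels
```

The resulting mask would then feed the downstream feature-fusion and SVM classification stages.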
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from the triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) obtained from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.
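Triangulation of back-projected rays, as mentioned above, is commonly done by taking the midpoint of the common perpendicular between the two rays. A generic sketch of that step (not the paper's specific formulation or uncertainty model):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two back-projected
    rays x = c_i + t_i * d_i; returns None for near-parallel rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = c2 - c1
    b = d1 @ d2                 # cosine of the angle between the rays
    d, e = d1 @ r, d2 @ r
    denom = 1.0 - b * b
    if abs(denom) < 1e-12:      # rays nearly parallel: no stable solution
        return None
    t1 = (d - b * e) / denom
    t2 = (b * d - e) / denom
    p1 = c1 + t1 * d1           # closest point on ray 1
    p2 = c2 + t2 * d2           # closest point on ray 2
    return 0.5 * (p1 + p2)
```

For rays that intersect exactly, the midpoint coincides with the intersection point.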
NASA Technical Reports Server (NTRS)
Lockwood, H. E.
1975-01-01
A series of Earth Resources Aircraft Program data flights were made over an aerial test range in Arizona for the evaluation of large cameras. Specifically, both medium altitude and high altitude flights were made to test and evaluate a series of color as well as black-and-white films. Image degradation, inherent in duplication processing, was studied. Resolution losses resulting from resolution characteristics of the film types are given. Color duplicates, in general, are shown to be degraded more than black-and-white films because of the limitations imposed by available aerial color duplicating stock. Results indicate that a greater resolution loss may be expected when the original has higher resolution. Photographs of the duplications are shown.
Sanchez, Richard D.; Hothem, Larry D.
2002-01-01
High-resolution airborne and satellite image sensor systems integrated with onboard data collection based on the Global Positioning System (GPS) and inertial navigation systems (INS) may offer a quick and cost-effective way to gather accurate topographic map information without ground control or aerial triangulation. The Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing of aerial photography was used in this project to examine the positional accuracy of integrated GPS/INS for terrain mapping in Glen Canyon, Arizona. The research application in this study yielded important information on the usefulness and limits of airborne integrated GPS/INS data-capture systems for mapping.
Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda
2015-11-04
European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.
Aerial photography for sensing plant anomalies
NASA Technical Reports Server (NTRS)
Gausman, H. W.; Cardenas, R.; Hart, W. G.
1970-01-01
Changes in the red tonal response of Kodak Ektachrome Infrared Aero 8443 film (EIR) are often incorrectly attributed solely to variations in infrared light reflectance of plant leaves, when the primary influence is a difference in visible light reflectance induced by varying chlorophyll contents. Comparisons are made among aerial photographic images of high- and low-chlorophyll foliage. New growth, foot rot, and boron and chloride nutrient toxicities produce low-chlorophyll foliage, which yields light-red or white EIR transparency images compared with the dark-red images of high-chlorophyll foliage. Deposits of the sooty mold fungus that subsists on the honeydew produced by brown soft scale insects obscure the citrus leaves' green color. Infected trees appear as black images on EIR film transparencies compared with red images of healthy trees.
Online Aerial Terrain Mapping for Ground Robot Navigation
Peterson, John; Chaudhry, Haseeb; Abdelatty, Karim; Bird, John; Kochersberger, Kevin
2018-01-01
This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle's overhead view to inform the ground vehicle's path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS) and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles. PMID:29461496
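Once terrain classes are mapped to per-cell traversal costs, energy-efficient path generation as described above reduces to a shortest-path search. A standard Dijkstra sketch over a cost grid (an illustrative stand-in; the paper does not specify its planner in this abstract):

```python
import heapq

def cheapest_path(cost, start, goal):
    """Dijkstra over a 2D grid of per-cell traversal costs (cost of
    entering a cell); returns (total_cost, path) under 4-connectivity."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal             # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]
```

Here a cell's cost would come from the navigation-cost estimate assigned to its terrain class.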
NASA Astrophysics Data System (ADS)
Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue
2015-04-01
Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (the proportion of segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar rule was obtained for pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
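The area-based accuracy assessment mentioned above weights each segmented object by its area rather than counting objects equally. A minimal sketch of that metric (an assumption about the exact protocol, which the abstract does not spell out):

```python
def area_weighted_oa(truth, pred, areas):
    """Area-based Overall Accuracy for object-based classification: the
    fraction of total segment area whose predicted class matches the
    reference class."""
    correct = sum(a for t, p, a in zip(truth, pred, areas) if t == p)
    return correct / sum(areas)
```

A large misclassified object therefore hurts the OA more than a small one, unlike a per-object count.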
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for Earth remote sensors, while vibration of the remote sensing platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology for image-motion prediction, focusing on algorithm optimization for image-motion prediction in imaging. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
REMOTE SENSING OF SEAGRASS WITH AVIRIS AND HIGH ALTITUDE AERIAL PHOTOGRAPHY
On May 15, 2002, AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data and high-altitude aerial photographs were acquired for coastal waters from Cape Lookout to Oregon Inlet, North Carolina. The study encompasses extensive areas of seagrass, federally protected submersed, r...
Image selection system. [computerized data storage and retrieval system
NASA Technical Reports Server (NTRS)
Knutson, M. A.; Hurd, D.; Hubble, L.; Kroeck, R. M.
1974-01-01
An image selection system (ISS) was developed for the NASA-Ames Research Center Earth Resources Aircraft Project. The ISS is an interactive, graphics-oriented, computer retrieval system for aerial imagery. An analysis of user coverage requests and retrieval strategies is presented, followed by a complete system description. The data base structure, retrieval processors, command language, interactive display options, file structures, and the system's capability to manage sets of selected imagery are described. A detailed example of an area coverage request is graphically presented.
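At its core, a geographic area-coverage request like the one described above is a bounding-box intersection query over image footprints. A minimal sketch (function and catalog names are hypothetical, not the ISS's actual interface):

```python
def covers(footprint, query):
    """True if an image footprint (min_lon, min_lat, max_lon, max_lat)
    intersects a query bounding box in the same coordinates."""
    f_min_lon, f_min_lat, f_max_lon, f_max_lat = footprint
    q_min_lon, q_min_lat, q_max_lon, q_max_lat = query
    return not (f_max_lon < q_min_lon or q_max_lon < f_min_lon or
                f_max_lat < q_min_lat or q_max_lat < f_min_lat)

def select_images(catalog, query):
    """Return the ids of frames whose footprints intersect the query box."""
    return [fid for fid, bbox in catalog.items() if covers(bbox, query)]
```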
Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery
NASA Astrophysics Data System (ADS)
Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.
2016-06-01
Mobile Mapping's ability to acquire high-resolution ground data is opposed by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, resulting in a deficient estimate of the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects. However, consistent compliance with sub-decimetre accuracy, as well as correction of errors in height, remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the diminished exterior orientation parameters of the MM platform will be utilised, as they enable the application of accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct the affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. Originating from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and a different original perspective. To respond to these challenges in feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario.
This method leads to a significant reduction of outliers due to the limited availability of putative matches and the utilisation of templates instead of feature descriptors. In our experiments discussed in this paper, typical urban scenes have been used to evaluate the proposed method. Even though no additional outlier removal techniques have been used, our method yields almost 90% correct correspondences. However, repetitive image patterns may still induce ambiguities which cannot be fully averted by this technique. Finally, possible advancements will be briefly presented.
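The template-matching step within a search window, as used above, is classically done with zero-mean normalized cross-correlation (NCC). An exhaustive numpy sketch of that primitive (not the paper's optimized procedure):

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return ((row, col), score) of the
    best placement under zero-mean normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            wn = np.sqrt((w * w).sum())
            if wn < 1e-12 or tn < 1e-12:
                continue                      # flat patch: NCC undefined
            score = (w * t).sum() / (wn * tn)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

In practice the search is restricted to a window predicted from the exterior orientation parameters, which is what makes template matching tractable and reduces ambiguity.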
Aerial image measurement technique for automated reticle defect disposition (ARDD) in wafer fabs
NASA Astrophysics Data System (ADS)
Zibold, Axel M.; Schmid, Rainer M.; Stegemann, B.; Scheruebl, Thomas; Harnisch, Wolfgang; Kobiyama, Yuji
2004-08-01
The Aerial Image Measurement System (AIMS)* for 193 nm lithography emulation has been brought into operation successfully worldwide. A second-generation system comprising 193 nm AIMS capability, a mini-environment and SMIF, the AIMS fab 193 plus, is currently being introduced into the market. By adjusting the numerical aperture (NA), illumination type and partial illumination coherence to match the conditions in 193 nm steppers or scanners, it can emulate the exposure tool for any type of reticle, such as binary, OPC and PSM, down to the 65 nm node. The system allows a rapid prediction of the wafer printability of defects, defect repairs and critical features, like dense patterns or contacts on the masks, without the need to perform expensive image qualification consisting of test wafer exposures followed by SEM measurements. Therefore, AIMS is a mask quality verification standard for high-end photomasks and is established in mask shops worldwide. The progress on the AIMS technology described in this paper will highlight that, besides mask shops, there will be a very beneficial use of AIMS in the wafer fab, and we propose an Automated Reticle Defect Disposition (ARDD) process. With smaller nodes, where design rules are 65 nm or less, it is expected that smaller defects on reticles will occur in increasing numbers in the wafer fab. These smaller mask defects will matter more and more and become a serious yield-limiting factor. With increasing mask prices and an increasing number and severity of defects on reticles, it will become cost-beneficial to perform defect disposition on reticles in wafer production. Currently ongoing studies demonstrate AIMS benefits for wafer fab applications. An outlook will be given for the extension of 193 nm aerial imaging down to the 45 nm node, based on emulation of immersion scanners.
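As a heavily simplified illustration of the aerial-image principle only: under fully coherent illumination, the aerial image of a thin binary mask is the squared magnitude of the mask spectrum low-pass filtered at the pupil cutoff NA/λ. Real AIMS/scanner emulation models partial coherence and the full illumination settings, which this sketch omits entirely:

```python
import numpy as np

def coherent_aerial_image(mask, na, wavelength, pixel):
    """Aerial image intensity |IFFT(pupil * FFT(mask))|^2 for a thin binary
    mask under coherent illumination. The pupil passes spatial frequencies
    below NA/wavelength; `pixel` is the sample spacing (same length unit)."""
    n = mask.shape[0]
    f = np.fft.fftfreq(n, d=pixel)                 # cycles per unit length
    fx, fy = np.meshgrid(f, f)
    pupil = np.hypot(fx, fy) <= na / wavelength    # circular low-pass filter
    field = np.fft.ifft2(pupil * np.fft.fft2(mask))
    return np.abs(field) ** 2
```

Feature sizes near the cutoff period λ/NA lose contrast, which is the qualitative effect AIMS measures when predicting printability.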
Counter Unmanned Aerial Systems Testing: Evaluation of VIS SWIR MWIR and LWIR passive imagers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carlisle; Woo, Bryana Lynn
This report contains analysis of unmanned aerial systems as imaged by visible, short-wave infrared, mid-wave infrared, and long-wave infrared passive devices. Testing was conducted at the Nevada National Security Site (NNSS) during the week of August 15, 2016. Target images in all spectral bands are shown and contrast versus background is reported. Calculations are performed to determine estimated pixels-on-target for detection and assessment levels, and the number of pixels needed to cover a hemisphere for detection or assessment at defined distances. Background clutter challenges are qualitatively discussed for different spectral bands, and low contrast scenarios are highlighted for long-wave infrared imagers.
EROS main image file - A picture perfect database for Landsat imagery and aerial photography
NASA Technical Reports Server (NTRS)
Jack, R. F.
1984-01-01
The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.
Bakó, Gábor; Tolnai, Márton; Takács, Ádám
2014-01-01
Remote sensing is a method that collects data of the Earth's surface without causing disturbances. Thus, it is worthwhile to use remote sensing methods to survey endangered ecosystems, as the studied species will behave naturally while undisturbed. The latest passive optical remote sensing solutions permit surveys from long distances. State-of-the-art highly sensitive sensor systems allow high spatial resolution image acquisition at high altitudes and at high flying speeds, even in low-visibility conditions. As the aerial imagery captured by an airplane covers the entire study area, all the animals present in that area can be recorded. A population assessment is conducted by visual interpretations of an ortho image map. The basic objective of this study is to determine whether small- and medium-sized bird species are recognizable in the ortho images by using high spatial resolution aerial cameras. The spatial resolution needed for identifying the bird species in the ortho image map was studied. The survey was adjusted to determine the number of birds in a colony at a given time. PMID:25046012
Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca
2013-01-01
The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r2=0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% of overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide application or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963
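The third phase above, gridding a classified weed mask into a three-category infestation map, can be sketched as follows (a simplified stand-in for the OBIA output; the 0%/<5%/>=5% category breaks follow the figures quoted in the abstract):

```python
import numpy as np

def weed_coverage_map(weed_mask, cell):
    """Fraction of weed pixels per square grid cell, plus a three-category
    map: 'free' (0%), 'low' (<5%) and 'high' (>=5%) weed coverage."""
    rows, cols = weed_mask.shape
    ny, nx = rows // cell, cols // cell
    # average the binary mask over each cell to get the weed fraction
    frac = weed_mask[:ny * cell, :nx * cell].reshape(ny, cell, nx, cell).mean(axis=(1, 3))
    categories = np.full(frac.shape, "low", dtype=object)
    categories[frac == 0] = "free"
    categories[frac >= 0.05] = "high"
    return frac, categories
```

Per-cell fractions like these are also what allow herbicide requirements to be computed in advance, as the abstract notes.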
USDA-ARS?s Scientific Manuscript database
Advances in technologies associated with unmanned aerial vehicles (UAVs) has allowed for researchers, farmers and agribusinesses to incorporate UAVs coupled with various imaging systems into data collection activities and aid expert systems for making decisions. Multispectral imageries allow for a q...
14. Aerial view showing bldg grouping with bldg #2 intact ...
14. Aerial view showing bldg grouping with bldg #2 intact previous to fire (long pitched roof with 7 distinct dormers near image center) - photo by Eastern Topographics, Wolfeboro, N.H., Sept. 1985 - Lawrence Machine Shop, Building No. 2, Union & Canal Streets, Lawrence, Essex County, MA
ISSUES IN DIGITAL IMAGE PROCESSING OF AERIAL PHOTOGRAPHY FOR MAPPING SUBMERSED AQUATIC VEGETATION
The paper discusses the numerous issues that needed to be addressed when developing a methodology for mapping Submersed Aquatic Vegetation (SAV) from digital aerial photography. Specifically, we discuss 1) choice of film; 2) consideration of tide and weather constraints; 3) in-s...
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Stec, Magdalena; Wieczorek, Malgorzata; Slopek, Jacek; Jurecka, Miroslawa
2016-04-01
The objective of this work is to discuss the usefulness of the k-means method in the process of detecting persons on oblique aerial photographs acquired by unmanned aerial vehicles (UAVs). The detection based on the k-means procedure belongs to one of the modules of a larger Search and Rescue (SAR) system which is being developed at the University of Wroclaw, Poland (research project no. IP2014 032773, financed by the Ministry of Science and Higher Education of Poland). The module automatically processes individual geotagged visible-light UAV-taken photographs or their orthorectified versions. Firstly, we separate the red (R), green (G) and blue (B) channels, express the raster data as numeric matrices and acquire the coordinates of image centres using the exchangeable image file format (EXIF). Subsequently, we divide the matrices into matrices of smaller dimensions, the latter being associated with the size of a spatial window suitable for discriminating between human and terrain. Each triplet of the smaller matrices (R, G and B) serves as input spatial data for the k-means classification. We found that, in several configurations of the k-means parameters, it is possible to distinguish a separate class which characterizes a person. We compare the skills of this approach by performing two experiments, based on UAV-taken RGB photographs and their orthorectified versions. This allows us to verify the hypothesis that the two exercises lead to similar classifications. In addition, we discuss the performance of the approach for dissimilar spatial windows, hence various dimensions of the above-mentioned matrices, in order to find the one which offers the most adequate classification. The numerical experiment is carried out using data acquired during a dedicated observational UAV campaign in the Izerskie Mountains (SW Poland).
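The clustering step underlying the module above can be sketched with plain k-means on per-window RGB samples (a generic numpy sketch with deterministic farthest-point initialization; the module's actual window size, number of classes and parameter configurations are not reproduced here):

```python
import numpy as np

def kmeans(data, k, iters=50):
    """Plain k-means on `data` of shape (n, d); returns (centroids, labels).
    Farthest-point initialization keeps the sketch deterministic."""
    centroids = [data[0]]
    for _ in range(k - 1):
        # next seed: the point farthest from all chosen seeds
        d = np.min([np.linalg.norm(data - c, axis=1) for c in centroids], axis=0)
        centroids.append(data[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # assign every sample to its nearest centroid, then re-estimate
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, labels
```

In the SAR setting, one of the resulting classes would be inspected for the colour signature characterizing a person against the terrain.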
NASA Astrophysics Data System (ADS)
Griesinger, Uwe A.; Dettmann, Wolfgang; Hennig, Mario; Heumann, Jan P.; Koehle, Roderick; Ludwig, Ralf; Verbeek, Martin; Zarrabian, Mardjan
2002-07-01
In optical lithography, balancing the aerial image of an alternating phase-shifting mask (alt. PSM) is a major challenge. For the current exposure wavelengths (248 nm and 193 nm), an optimum etching method is necessary to overcome imbalance effects. Defects play an important role in the imbalances of the aerial image. In this contribution, defects will be discussed by applying the methodology of global phase imbalance control also to local imbalances resulting from quartz defects. The effective phase error can be determined with an AIMS system by measuring the CD width between the images of deep and shallow trenches at different focus settings. The AIMS results are analyzed in comparison to the simulated and lithographic print results of the alternating structures. For the analysis of local aerial image imbalances, it is necessary to investigate the capability of detecting these phase defects with state-of-the-art inspection systems. Alternating PSMs containing programmed defects were inspected with different algorithms to investigate the capture rate of special phase defects as a function of defect size. Besides inspection, repair of phase defects is also an important task. In this contribution we show the effect of repair on the optical behavior of phase defects. Due to the limited accuracy of the repair tools, the repaired area still shows a certain local phase error. This error can be caused either by residual quartz material or by substrate damage. The influence of such repair-induced phase errors on the aerial image was investigated.
Image feature based GPS trace filtering for road network generation and road segmentation
Yuan, Jiangye; Cheriyadat, Anil M.
2015-10-19
We propose a new method to infer road networks from GPS trace data and accurately segment road regions in high-resolution aerial images. Unlike previous efforts that rely on GPS traces alone, we exploit image features to infer road networks from noisy trace data. The inferred road network is used to guide road segmentation. We show that the number of image segments spanned by the traces and the trace orientation validated with image features are important attributes for identifying GPS traces on road regions. Based on filtered traces, we construct road networks and integrate them with image features to segment road regions. Lastly, our experiments show that the proposed method produces more accurate road networks than the leading method that uses GPS traces alone, and also achieves high accuracy in segmenting road regions even with very noisy GPS data.
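The segments-spanned attribute used above can be sketched by counting distinct segment labels under the trace points; how the attribute is thresholded and combined with the orientation check is a detail of the paper, so the filter here takes a caller-supplied predicate:

```python
import numpy as np

def segments_spanned(segment_labels, trace):
    """Number of distinct image segments a GPS trace crosses, given a
    segment-label image and trace points as (row, col) pixel coordinates."""
    rows, cols = segment_labels.shape
    hit = {segment_labels[r, c] for r, c in trace
           if 0 <= r < rows and 0 <= c < cols}
    return len(hit)

def filter_traces(segment_labels, traces, predicate):
    """Keep the traces whose segments-spanned count satisfies `predicate`
    (the paper combines this attribute with orientation validation)."""
    return [t for t in traces if predicate(segments_spanned(segment_labels, t))]
```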
Alacid, Beatriz
2018-01-01
This work presents a method for oil-spill detection on Spanish coasts using aerial Side-Looking Airborne Radar (SLAR) images, which are captured using a Terma sensor. The proposed method uses grayscale image processing techniques to identify the dark spots that represent oil slicks on the sea. The approach is based on two steps. First, the noise regions caused by aircraft movements are detected and labeled in order to avoid false-positive detections. Second, a segmentation process guided by a saliency map technique is used to detect image regions that represent oil slicks. The results show that the proposed method improves on previous approaches to this task employing SLAR images. PMID:29316716
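Dark-spot extraction of the kind described above can be sketched as thresholding followed by connected-component labelling (a baseline illustration only; the paper's saliency-guided segmentation and noise-region handling are not reproduced, and the threshold and minimum area are assumptions):

```python
import numpy as np
from collections import deque

def dark_spots(image, thresh, min_area=5):
    """Connected dark regions (candidate oil slicks) in a grayscale image:
    threshold, then 4-connected component labelling by flood fill."""
    dark = image < thresh
    seen = np.zeros_like(dark, dtype=bool)
    regions = []
    rows, cols = dark.shape
    for r in range(rows):
        for c in range(cols):
            if dark[r, c] and not seen[r, c]:
                comp, q = [], deque([(r, c)])
                seen[r, c] = True
                while q:                      # flood-fill one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and dark[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:     # drop speckle-sized spots
                    regions.append(comp)
    return regions
```

The minimum-area filter plays a role similar to the paper's first step: suppressing small spurious dark responses before segmentation.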
Use of archive aerial photography for monitoring black mangrove populations
USDA-ARS?s Scientific Manuscript database
A study was conducted on the south Texas Gulf Coast to evaluate archive aerial color-infrared (CIR) photography combined with supervised image analysis techniques to quantify changes in black mangrove [Avicennia germinans (L.) L.] populations over a 26-year period. Archive CIR film from two study si...
Interpretation and mapping of gypsy moth defoliation from ERTS (LANDSAT)-1 temporal composites
NASA Technical Reports Server (NTRS)
Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Kowalik, W. S.
1975-01-01
The author has identified the following significant results. Photointerpretation of temporally composited color Diazo transparencies of ERTS(LANDSAT) images is a practical method for detecting and locating levels of widespread defoliation. ERTS 9 x 9 inch images are essentially orthographic and are produced at a nearly constant 1:1,000,000 scale. This allows direct superposition of scenes for temporal composites. ERTS coverage provides a sweeping 180 km (110 mile) wide view, permitting one interpreter to rapidly delineate defoliation in an area requiring days and weeks of work by aerial surveys or computerized processing. Defoliation boundaries can be located on the images within maximum errors on the order of hundreds of meters. The enhancement process is much less expensive than aerial surveys or computerized processing. Maps produced directly from interpretation are manageable working products. The 18 day periodic coverage of ERTS is not frequent enough to replace aerial survey mapping because defoliation and refoliation move as waves.
Ortiz, Alberto; Bonnin-Pascual, Francisco; Garcia-Fidalgo, Emilio; Company-Corcoles, Joan P.
2016-01-01
Vessel maintenance requires periodic visual inspection of the hull in order to detect typical defective situations of steel structures such as, among others, coating breakdown and corrosion. These inspections are typically performed by well-trained surveyors at great cost because of the need for providing access means (e.g., scaffolding and/or cherry pickers) that allow the inspector to be at arm’s reach from the structure under inspection. This paper describes a defect detection approach comprising a micro-aerial vehicle which is used to collect images from the surfaces under inspection, particularly focusing on remote areas where the surveyor has no visual access, and a coating breakdown/corrosion detector based on a three-layer feed-forward artificial neural network. As it is discussed in the paper, the success of the inspection process depends not only on the defect detection software but also on a number of assistance functions provided by the control architecture of the aerial platform, whose aim is to improve picture quality. Both aspects of the work are described along the different sections of the paper, as well as the classification performance attained. PMID:27983627
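A three-layer feed-forward network of the kind used for the coating breakdown/corrosion detector reduces to two matrix multiplications with sigmoid activations. The sketch below uses untrained random weights and an assumed 12-dimensional patch descriptor purely to show the forward pass; the paper's trained network and input features differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Three-layer feed-forward pass: input -> hidden -> defect score."""
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(12, 8)), np.zeros(8)   # 12-D patch descriptor, 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # one corrosion/no-corrosion output
scores = mlp_forward(rng.normal(size=(5, 12)), W1, b1, W2, b2)
```

Each row of `scores` is a probability-like value in (0, 1) that a trained network would threshold to label a patch as defective.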
NASA Astrophysics Data System (ADS)
Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.
2015-08-01
Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, in order to maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.
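The metadata-only global motion estimate has a simple geometric core for a nadir-pointing camera: divide the INS position delta by the ground sample distance. A minimal sketch under flat-ground, pinhole assumptions (the helper name is hypothetical):

```python
import math

def global_motion_px(dx_m, dy_m, altitude_m, focal_px):
    """Predict the inter-frame pixel shift of a nadir camera from INS
    position deltas alone (pinhole model, flat ground assumed)."""
    gsd = altitude_m / focal_px           # metres per pixel on the ground
    return dx_m / gsd, dy_m / gsd

# 100 m flight altitude with a 2000 px focal length gives a 5 cm GSD,
# so a 0.5 m eastward step predicts a 10 px image shift.
shift = global_motion_px(0.5, 0.2, 100.0, 2000.0)
```

The paper's refinement step would then adjust this prediction with a low-complexity image analysis before handing the motion field to the encoder.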
3D Tree Dimensionality Assessment Using Photogrammetry and Small Unmanned Aerial Vehicles
2015-01-01
Detailed, precise, three-dimensional (3D) representations of individual trees are a prerequisite for an accurate assessment of tree competition, growth, and morphological plasticity. Until recently, our ability to measure the dimensionality, spatial arrangement, shape of trees, and shape of tree components with precision has been constrained by technological and logistical limitations and cost. Traditional methods of forest biometrics provide only partial measurements and are labor intensive. Active remote technologies such as LiDAR operated from airborne platforms provide only partial crown reconstructions. The use of terrestrial LiDAR is laborious, has portability limitations and high cost. In this work we capitalized on recent improvements in the capabilities and availability of small unmanned aerial vehicles (UAVs), light and inexpensive cameras, and developed an affordable method for obtaining precise and comprehensive 3D models of trees and small groups of trees. The method employs slow-moving UAVs that acquire images along predefined trajectories near and around targeted trees, and computer vision-based approaches that process the images to obtain detailed tree reconstructions. After we confirmed the potential of the methodology via simulation we evaluated several UAV platforms, strategies for image acquisition, and image processing algorithms. We present an original, step-by-step workflow which utilizes open source programs and original software. We anticipate that future development and applications of our method will improve our understanding of forest self-organization emerging from the competition among trees, and will lead to a refined generation of individual-tree-based forest models. PMID:26393926
Open set recognition of aircraft in aerial imagery using synthetic template models
NASA Astrophysics Data System (ADS)
Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert
2017-05-01
Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
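The defining behaviour of an open set recognizer, as opposed to a closed set classifier, is the ability to output "unknown". A minimal template-matching sketch of that idea follows; the cosine score, threshold, and class names are illustrative assumptions, not the paper's HOG/SVM pipeline:

```python
import numpy as np

def open_set_label(feature, templates, threshold=0.8):
    """Match a feature vector against per-class templates; if every
    similarity falls below the threshold, reject as 'unknown'."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cos(feature, t) for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

templates = {"fighter": np.array([1.0, 0.0, 0.2]),
             "transport": np.array([0.0, 1.0, 0.2])}
```

Calibrating the threshold is exactly where the paper notes that some knowledge of the real target is still required when training on synthetic data.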
3D Tree Dimensionality Assessment Using Photogrammetry and Small Unmanned Aerial Vehicles.
Gatziolis, Demetrios; Lienard, Jean F; Vogs, Andre; Strigul, Nikolay S
2015-01-01
Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms
Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer
2014-01-01
In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877
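The spatial quality tests used by national mapping agencies typically start from the horizontal RMSE of check-point residuals; the NSSDA standard, for instance, reports 95% horizontal accuracy as 1.7308 times the radial RMSE when the x and y error components are similar. A small sketch of that computation (residual values are made up):

```python
import math

def nssda_horizontal(dx, dy):
    """Radial RMSE from check-point residuals and the NSSDA 95% horizontal
    accuracy (factor 1.7308, valid when RMSE_x and RMSE_y are similar)."""
    n = len(dx)
    rmse_r = math.sqrt(sum(x * x + y * y for x, y in zip(dx, dy)) / n)
    return rmse_r, 1.7308 * rmse_r

# Hypothetical residuals (metres) at three check points.
rmse_r, acc95 = nssda_horizontal([0.03, -0.04, 0.00], [0.04, 0.03, -0.05])
```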
Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment
NASA Astrophysics Data System (ADS)
Cusicanqui, Johnny; Kerle, Norman; Nex, Francesco
2018-06-01
Remote sensing has evolved into the most efficient approach to assess post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited due to tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usability for video and photos, shown by the difference of only 2 cm between the accuracies of video- and photo-based 3-D point clouds. Despite the low video resolution, the usability of these data was compensated for by a small ground sampling distance. Reduced quality and applicability resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of texture, or moving objects. We conclude that not only are current video data more rapidly available than photos, but they also have a comparable ability to assist in image-based structural damage assessment and other post-disaster activities.
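Accuracy comparisons between two photogrammetric point clouds are commonly expressed as a mean cloud-to-cloud nearest-neighbour distance. A brute-force sketch of that metric (the 2 cm offset below is a toy example, not the study's data):

```python
import numpy as np

def cloud_to_cloud(a, b):
    """Mean nearest-neighbour distance from cloud a to cloud b (brute force;
    real tools use k-d trees for large clouds)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

photo = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
video = photo + np.array([0.02, 0.0, 0.0])   # a systematic 2 cm offset
gap = cloud_to_cloud(video, photo)
```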
NASA Astrophysics Data System (ADS)
Frankl, Amaury; Stal, Cornelis; De Wit, Bart; De Wulf, Alain; Salvador, Pierre-Gil; Nyssen, Jan
2014-05-01
In erosion studies, accurate spatio-temporal data are required to fully understand the processes involved and their relationship with environmental controls. With cameras being mounted on Unmanned Aerial Vehicles (UAVs), the latter allow to collect low-altitude aerial photographs over small catchments in a cost-effective and rapid way. From large data sets of overlapping aerial photographs, Structure from Motion - Multi View Stereo workflows, integrated in various software such as PhotoScan used here, allow to produced detailed Digital Surface Models (DSMs) and ortho-mosaics. In this study we present the results from a survey carried out in a small agricultural catchment near Hallines, in Northern France. A DSM and ortho-mosaic was produced of the catchment using photographs taken from a low-cost radio-controlled microdrone (DroneFlyer Hexacopter). Photographs were taken with a Sony Nex 5 (16.1 M pixels) camera having a fixed normal lens of 50 mm. In the field, Ground Control Points were materialized by unambiguously determinable targets, measured with a 1'' total station (Leica TS15i). Cross-sections of rills and ephemeral gullies were also quantified from total station measurements and from terrestrial image-based 3D modelling. These data allowed to define the accuracy of the DSM and the representation of the erosion features in it. The feasibility of UAVs photographic surveys to improve our understanding on water-erosion processes such as sheet, rill and gully erosion is discussed. Keywords: Ephemeral gully, Erosion study, Image-based 3D modelling, Microdrone, Rill, UAVs.
NASA Astrophysics Data System (ADS)
Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George
2018-07-01
Mobile Mapping (MM) solutions have become a significant extension to traditional data acquisition methods over the last years. Independently of the sensor carried by a platform, be it laser scanners or cameras, high-resolution data postings are opposed by poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimates are propagated by IMUs, which are furthermore prone to drift effects. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to costs and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation by an adjustment solution. Unlike MM as such, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified by employing well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario, and requires several strategies to yield reliable and accurate correspondences. Scale, perspective and content vary strongly between both image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Although the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment will be presented. It will be shown that decimetre accuracy is well achievable in a real data test scenario.
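Once correspondences between MM and aerial images are available, one basic building block of an adjustment is a least-squares rigid alignment of the matched points. A 2-D Procrustes/Kabsch sketch under that simplification (the full paper's adjustment is more elaborate):

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (Procrustes via SVD), as one could use to correct an MM trajectory
    from aerial-image correspondences."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 3.0]])
dst = src @ R_true.T + np.array([1.5, -0.8])
R, t = fit_rigid_2d(src, dst)
```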
Automatic extraction of tree crowns from aerial imagery in urban environment
NASA Astrophysics Data System (ADS)
Liu, Jiahang; Li, Deren; Qin, Xunwen; Yang, Jianfeng
2006-10-01
Traditionally, field-based investigation has been the main method of surveying greenbelt in urban environments, which is costly and has a low updating frequency. In higher-resolution images, the structure and texture of tree canopy imagery show great statistical similarity despite the great difference in canopy configurations, and the surface structures and textures of tree crowns are very different from those of other land-cover types. In this paper, we present an automatic method to detect tree crowns in high resolution images of urban environments without any a priori knowledge. Our method captures the unique structure and texture of the tree crown surface, using the variance and mathematical expectation of a defined image window to coarsely position candidate canopy blocks, and then analysing their inner structure and texture to refine these candidate blocks. The possible spans of all the feature parameters used in our method are automatically generated from a small number of samples, and holes and their distribution are introduced as important characteristics in the refining process. The isotropy of candidate image blocks and of the holes' distribution is also integrated in our method. After introducing the theory of our method, aerial imagery (with a resolution of about 0.3 m) was used to test it, and the results indicate that our method is an effective approach to automatically detecting tree crowns in urban environments.
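The coarse positioning step (windows filtered by mean and variance) can be sketched as follows. The window size and thresholds are made-up illustrations, not the paper's learned parameter spans:

```python
import numpy as np

def candidate_windows(img, size, mean_rng, var_min):
    """Coarse positioning: flag non-overlapping windows whose mean falls in
    a 'canopy' brightness range and whose variance exceeds a texture floor."""
    hits = []
    for r in range(0, img.shape[0] - size + 1, size):
        for c in range(0, img.shape[1] - size + 1, size):
            w = img[r:r + size, c:c + size]
            if mean_rng[0] <= w.mean() <= mean_rng[1] and w.var() >= var_min:
                hits.append((r, c))
    return hits

img = np.zeros((8, 8))
img[0:4, 0:4] = [[60, 90, 60, 90]] * 4   # textured, mid-tone "canopy" block
hits = candidate_windows(img, 4, (50, 100), 100)
```

The refinement stage would then examine the inner structure (holes and their distribution) of each flagged block.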
Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation
NASA Astrophysics Data System (ADS)
Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.
2018-04-01
To address the problem that targets may appear at different orientations in unmanned aerial vehicle (UAV) images, target detection algorithms based on rotation-invariant features are studied, and this paper proposes a method of RIFF (Rotation-Invariant Fast Features) based on a look-up table and polar-coordinate acceleration for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of standard RIFF, while the operational efficiency is greatly improved.
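The radial gradient transform underlying RIFF re-expresses image gradients in a radial/tangential basis about the patch centre, so the resulting pair of components does not change when the patch rotates. A minimal numpy sketch of that transform (not the paper's accelerated look-up-table version):

```python
import numpy as np

def radial_gradient(gx, gy, cx, cy):
    """Project gradients (gx, gy) onto radial and tangential unit vectors
    about (cx, cy); the pair (g.r, g.t) is invariant to patch rotation."""
    h, w = gx.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - cx, ys - cy
    norm = np.hypot(rx, ry)
    norm[norm == 0] = 1.0              # avoid division by zero at the centre
    rx, ry = rx / norm, ry / norm      # radial unit vectors
    tx, ty = -ry, rx                   # tangential unit vectors
    return gx * rx + gy * ry, gx * tx + gy * ty

# Uniform horizontal gradient about the centre of a 3x3 patch.
g_r, g_t = radial_gradient(np.ones((3, 3)), np.zeros((3, 3)), 1, 1)
```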
NASA Astrophysics Data System (ADS)
Wiegand, C.; Geitner, C.; Heinrich, K.; Rutzinger, M.
2012-04-01
Small and shallow eroded areas characterize the landscape of many pastures and meadows in the Alps. The extent of such erosion phenomena varies between 2 m2 and 200 m2. These patches tend to be only a few decimetres thick, with a maximum depth of 2 m. The processes involved are shallow landslides, superficial erosion by snow and livestock trampling. Key parameters that influence the emergence of shallow erosion are the geological, topographical and climatic circumstances in an area as well as its soils, vegetation and land use. The negative impact of this phenomenon includes not only the loss of soil but also the reduced attractiveness of the landscape, especially in tourist regions. One approach to identifying and mapping geomorphological elements is remote sensing. The analysis of aerial images is a suitable method for identifying the multi-temporal dynamics of shallow eroded areas because of their good spatial and temporal resolution. For this purpose, we used a pixel-based approach to detect these areas semi-automatically in an orthophoto. In a first step, each aerial image was classified using dynamic thresholds derived from the histogram of the orthophoto. In a second step, the identified areas of erosion were filtered and visually interpreted. Based on this procedure, eroded areas with a minimum size of 5 m2 were detected in a test site located in the Inner Schmirn Valley (Tyrol, Austria). The altitude of the test site ranges between 1,980 m and 2,370 m, with a mean inclination of 36°, facing E to SE. Geologically, the slope is part of the "Hohe Tauern Window", characterized by "Bündner schists" deficient in lime and regolith. Until the 1960s, the slope was used as a hay meadow. Orthophotos from 2000, 2003, 2007 and 2010 were used for this investigation. Older aerial images were not suitable because of their lower resolution and poor ortho-rectification. However, they are useful for relating the results of the ten-year time-span to a larger temporal context.
No significant increase of erosion could be observed for the investigated ten-year period. The majority of the eroded areas show no distinct trend but rather an irregular pattern of increase and decrease. The results fit well in a larger temporal context: in aerial images of the 1950s, the slope already shows several eroded patches, which did not change until the year 2000. The owners also confirm that erosion was even a problem before abandonment. In this case, the inclination of the terrain seems to exceed the influence of land-use activities. With the semi-automated detection of such eroded areas, a more objective and time-saving method was found. The results contribute to an improved understanding of the process and can initiate a long-term observation. In subsequent studies we will apply the approach to further test sites and adapt it for the detection of smaller eroded areas.
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. 
The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
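The simplest member of the multiple-kernel-learning family, and the core of any such framework, is a convex combination of per-modality Gram matrices that any kernelised classifier can consume. A sketch with assumed RBF kernels and made-up feature dimensions (the paper's MKL framework also learns the weights):

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """RBF Gram matrix for one feature modality."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_gram(grams, weights):
    """Convex combination of per-modality Gram matrices (fixed weights)."""
    return sum(w * K for w, K in zip(weights, grams))

rng = np.random.default_rng(1)
X_cnn = rng.normal(size=(6, 16))   # stand-in for CNN features
X_3d = rng.normal(size=(6, 4))     # stand-in for 3D point cloud features
K = combined_gram([rbf_gram(X_cnn), rbf_gram(X_3d)], [0.6, 0.4])
```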
Production in GDLRC and Present Reflections
NASA Astrophysics Data System (ADS)
Kisa, A.; Çolak, S.; Bakici, S.; Özmüş, L.
2013-08-01
Recently, Turkey's National Geographic Information System (TNGIS) has carried out more comprehensive studies. Within the scope of these studies, the General Directorate of Land Registry and Cadastre (GDLRC) has projects in many areas within its jurisdiction. GDLRC started the Land Registry and Cadastre Modernization Project (LRCMP) in 2008 and still continues this project. The current project has been very successful in the renewal and transfer to digital media, after the completion of the country's digital cadastre. A comprehensive study was prepared within the scope of this project, covering human resources development, the renovation of cadastre offices and improvement of their services, the examination and reporting of real estate valuation, and the renovation and updating of the cadastre. All work continues at the same speed and with the same determination. With these developments, GDLRC works with institutions, organizations and citizens, further increasing interoperability and trust. GDLRC produces, stores, manages and preserves property information across the country, and the use and development of real estate is an important part of its work. In this context, one of the layers of spatial information systems, and an essential requirement, is imagery obtained by means of remote sensing satellite photos and/or aerial photographs, which plays quite an important role. To meet the common needs of different institutions and organizations, aerial photographs and orthophoto imagery are needed; aerial photographs are more up to date, precise, clear and reliable. GDLRC, having signed important projects, is working to implement the Orthophoto Information System (OIS) project. GDLRC was equipped with new photogrammetric systems in 2009. In this way, technological advances in the industry are closely monitored and the task has been carried out successfully.
A 200,000 km2 area of the country was covered by 1/5000-scale digital color orthophoto images produced between 2009 and 2012. Orthophoto images of other areas are produced by the General Command of Mapping (GCM). Orthophotos, base images covering the whole country, cloudless, with a ground sample distance (GSD) of 30-45 cm, will be produced by both institutions in 2014. Ongoing renewal projects in the covered areas are important. At the orthophoto production stage, stereo, color and near-infrared aerial photographs as well as terrain elevation models are also available. For municipal areas, 1/1000-scale orthophoto images were prepared in this process. The orthophotos can be used in digital cadastral works, in engineering projects of other institutions, in decision-support processes and in quality control, including the legal dimension of features. For these purposes, GDLRC prepared the image layer for TNGIS in accordance with OGC Web Services standards and created it successfully. Two projects are planned by GDLRC; in both, historical aerial photographs retrieved from the GDLRC and GCM archives will be scanned, used to produce orthophotos, and served via the web.
NASA Astrophysics Data System (ADS)
Aslett, Zan; Taranik, James V.; Riley, Dean N.
2018-02-01
Aerial spatially enhanced broadband array spectrograph system (SEBASS) long-wave infrared (LWIR) hyperspectral image data were used to map the distribution of rock-forming minerals indicative of sedimentary and meta-sedimentary lithologies around Boundary Canyon, Death Valley, California, USA. Collection of data over the Boundary Canyon detachment fault (BCDF) facilitated measurement of numerous lithologies representing a contact between the relatively unmetamorphosed Grapevine Mountains allochthon and the metamorphosed core complex of the Funeral Mountains autochthon. These included quartz-rich sandstone, quartzite, conglomerate, and alluvium; muscovite-rich schist, siltstone, and slate; and carbonate-rich dolomite, limestone, and marble, ranging in age from late Precambrian to Quaternary. Hyperspectral data were reduced in dimensionality and processed to statistically identify and map unique emissivity spectra endmembers. Some minerals (e.g., quartz and muscovite) dominate multiple lithologies, resulting in a limited ability to differentiate them. Abrupt variations in image data emissivity amongst pelitic schists corresponded to amphibolite; these rocks represent gradation from greenschist- to amphibolite-metamorphic facies lithologies. Although the full potential of LWIR hyperspectral image data may not be fully utilized within this study area due to lack of measurable spectral distinction between rocks of similar bulk mineralogy, the high spectral resolution of the image data was useful in characterizing silicate- and carbonate-based sedimentary and meta-sedimentary rocks in proximity to fault contacts, as well as for interpreting some mineral mixtures.
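Endmember mapping in hyperspectral data of this kind is often scored with the spectral angle between a pixel spectrum and a reference endmember, which is insensitive to overall gain. A minimal sketch (the emissivity values below are made up for illustration):

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a pixel spectrum and an endmember;
    small angles indicate similar bulk mineralogy."""
    c = spectrum @ reference / (np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

quartz_like = np.array([0.95, 0.70, 0.80, 0.92])   # made-up emissivity samples
pixel = 0.5 * quartz_like                          # same shape, different gain
```

Gain invariance is also why spectrally similar lithologies (e.g. quartz-rich sandstone versus quartzite) remain hard to separate: their angles to a shared endmember are all small.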
Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images
Ortega-Terol, Damian; Ballesteros, Rocio
2017-01-01
Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded by sun reflection effects, which reduce image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with a UAV, based on a photogrammetric strategy included in flight planning and control software developed by the authors. Two main consequences derive from the approach: (i) areas of the images that contain sun reflection problems can be excluded; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomic parameters computed (e.g., normalized difference vegetation index, NDVI) are improved, since radiometrically defective pixels are not considered. Finally, an accuracy assessment was performed to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision of sun reflection detection can be guaranteed using this approach and current low-cost UAV technology. PMID:29036930
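The geometric core of hotspot detection can be sketched in a few lines: the hotspot appears where the anti-solar ray through the camera's projection centre meets the ground. The sketch below is a minimal flat-terrain simplification, not the authors' implementation; the function name and the sun-angle convention (azimuth measured clockwise from north) are assumptions.

```python
import math

def hotspot_ground_point(cam_xyz, sun_azimuth_deg, sun_elevation_deg):
    """Intersect the anti-solar ray through the camera centre with the
    ground plane z = 0 (flat-terrain simplification)."""
    x0, y0, z0 = cam_xyz
    az = math.radians(sun_azimuth_deg)
    el = math.radians(sun_elevation_deg)
    # Unit vector pointing from the sun towards the scene (downwards).
    d = (-math.sin(az) * math.cos(el),
         -math.cos(az) * math.cos(el),
         -math.sin(el))
    t = -z0 / d[2]          # ray parameter where the ray reaches z = 0
    return (x0 + t * d[0], y0 + t * d[1])

# Sun due south (azimuth 180 deg) at 45 deg elevation, camera 100 m up:
# the hotspot lies 100 m north of the nadir point.
x, y = hotspot_ground_point((0.0, 0.0, 100.0), 180.0, 45.0)
```

Image pixels whose back-projected ground coordinates fall near this point are candidates for exclusion.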
Remote Sensing Soil Moisture Analysis by Unmanned Aerial Vehicles Digital Imaging
NASA Astrophysics Data System (ADS)
Yeh, C. Y.; Lin, H. R.; Chen, Y. L.; Huang, S. Y.; Wen, J. C.
2017-12-01
In recent years, remote sensing analysis has been applied to research on climate change, environmental monitoring, geology, hydro-meteorology, and related fields. However, traditional methods for surveying the spatial distribution of surface soil moisture over wide areas require considerable resources and are costly. In the past, remote sensing estimated soil moisture from shortwave, thermal infrared, or infrared satellite data, which also demands substantial resources, labor, and money. In this study, digital image color was instead used to establish a multiple linear regression model relating surface soil color to soil moisture. We used an Unmanned Aerial Vehicle (UAV) to take aerial photos of fallow farmland and simultaneously collected surface soil samples from the top 0-5 cm, which were oven-dried at 110 °C for 24 h. The software ImageJ 1.48 was applied to decompose the digital images into Red, Green, and Blue (R, G, B) hue values. Correlation analysis was performed between the image hues and the measured surface soil moisture at each sampling point. From the R, G, B values and soil moisture we established a multiple regression model to estimate the spatial distribution of surface soil moisture. Comparing measured and estimated soil moisture, the coefficient of determination (R2) reached 0.5-0.7. Field uncertainties such as sun illumination, sun exposure angle, and shadow affect the result; given these, an R2 of 0.5-0.7 reflects a good outcome for this in-situ test of estimating soil moisture from digital images. Based on these outcomes, using digital images from a UAV to estimate surface soil moisture is acceptable. However, further investigation requires more than ten days of data (four acquisitions per day) to verify the relation between image hue and soil moisture and to obtain a reliable moisture estimation model. It would also be better to use a digital single-lens reflex camera to prevent image deformation and obtain better auto exposure. Keywords: soil, moisture, remote sensing
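The multiple linear regression step described above can be sketched with ordinary least squares; the calibration numbers below are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical calibration data: mean R, G, B hue values at sampling
# points and gravimetrically measured soil moisture (volume fractions).
rgb = np.array([[120, 100, 80],
                [110,  95, 78],
                [ 90,  85, 70],
                [ 70,  72, 60],
                [ 60,  65, 55]], dtype=float)
moisture = np.array([0.08, 0.11, 0.17, 0.24, 0.29])

# Fit moisture ~ b0 + b1*R + b2*G + b3*B by ordinary least squares.
X = np.column_stack([np.ones(len(rgb)), rgb])
beta, *_ = np.linalg.lstsq(X, moisture, rcond=None)

def predict(r, g, b):
    """Estimate soil moisture from mean R, G, B hue values."""
    return float(beta @ [1.0, r, g, b])
```

Applying `predict` per pixel or per grid cell of the orthoimage yields the spatial moisture map the abstract describes.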
Feasibility of Using Synthetic Aperture Radar to Aid UAV Navigation
Nitti, Davide O.; Bovenga, Fabio; Chiaradia, Maria T.; Greco, Mario; Pinelli, Gianpaolo
2015-01-01
This study explores the potential of Synthetic Aperture Radar (SAR) to aid Unmanned Aerial Vehicle (UAV) navigation when Inertial Navigation System (INS) measurements are not accurate enough to eliminate drifts from a planned trajectory. This problem can affect the medium-altitude long-endurance (MALE) UAV class, which permits heavy and wide payloads (as required by SAR) and flights of thousands of kilometres accumulating large drifts. The basic idea is to infer position and attitude of an aerial platform by inspecting both amplitude and phase of SAR images acquired onboard. For the amplitude-based approach, the system navigation corrections are obtained by matching the actual coordinates of ground landmarks with those automatically extracted from the SAR image. When the use of SAR amplitude is unfeasible, the phase content can be exploited through SAR interferometry by using a reference Digital Terrain Model (DTM). A feasibility analysis was carried out to derive system requirements by exploring both radiometric and geometric parameters of the acquisition setting. We showed that for the MALE UAV class, specific commercial navigation sensors and SAR systems, typical landmark position accuracies and classes, and available DTMs, UAV coordinates can be estimated with errors bounded within ±12 m, thus making the proposed SAR-based backup system feasible. PMID:26225977
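For the amplitude-based approach, a heavily simplified version of the navigation correction is the mean offset between surveyed landmark coordinates and those extracted from the SAR image. The real system also estimates attitude; this sketch and its numbers are illustrative only.

```python
def position_correction(known, measured):
    """Estimate a 2-D position drift as the mean offset between surveyed
    landmark coordinates and the coordinates extracted from the SAR image."""
    n = len(known)
    dx = sum(k[0] - m[0] for k, m in zip(known, measured)) / n
    dy = sum(k[1] - m[1] for k, m in zip(known, measured)) / n
    return dx, dy

# Landmarks appear 5 m west and 5 m north of their surveyed positions,
# so the platform's position estimate must shift by (+5, -5) m.
dx, dy = position_correction([(100.0, 200.0), (300.0, 400.0)],
                             [( 95.0, 205.0), (295.0, 405.0)])
```

Averaging over several landmarks suppresses individual extraction errors, consistent with the landmark-accuracy analysis in the abstract.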
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.
2012-01-01
As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the image bounds are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to where the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
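The ground-to-pixel mapping at the heart of this geo-rectification can be sketched with a simple pinhole model. The actual code also handles DEM occlusion and region-of-interest selection; the function and the camera convention below are assumptions.

```python
import numpy as np

def project_to_pixels(ground_xyz, cam_pos, R, f, cx, cy):
    """Project ground points into a pinhole camera: p_cam = R (p - C),
    then u = f*X/Z + cx, v = f*Y/Z + cy. Points behind the camera -> NaN."""
    p = (np.asarray(ground_xyz, dtype=float) - cam_pos) @ R.T
    z = p[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.where(z > 0, f * p[:, 0] / z + cx, np.nan)
        v = np.where(z > 0, f * p[:, 1] / z + cy, np.nan)
    return np.column_stack([u, v])

# Nadir-looking camera 1000 m above the origin; its z-axis points straight
# down, so the rotation flips the world y and z axes.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
pix = project_to_pixels([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]],
                        np.array([0.0, 0.0, 1000.0]), R,
                        f=1000.0, cx=500.0, cy=500.0)
```

Evaluating this mapping at every DEM cell inside the ROI "paints" the terrain with image values, which is the projection the abstract describes.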
UCXp camera imaging principle and key technologies of data post-processing
NASA Astrophysics Data System (ADS)
Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao
2014-03-01
The large-format digital aerial camera UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data for new Beichuan County, this paper describes the data processing and its effects.
Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R
2018-06-01
A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat where they are regularly fed for ecotourism purposes as compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort. This article is protected by copyright. All rights reserved.
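The increased alignment reported above is commonly quantified with a polarization order parameter computed from individual headings extracted from the imagery; the metric choice in this minimal sketch is an assumption, not necessarily the authors' measure.

```python
import math

def alignment(headings_deg):
    """Polarization order parameter: magnitude of the mean unit heading
    vector; 1.0 = perfectly aligned shoal, ~0 = random headings."""
    vx = sum(math.cos(math.radians(h)) for h in headings_deg)
    vy = sum(math.sin(math.radians(h)) for h in headings_deg)
    return math.hypot(vx, vy) / len(headings_deg)
```

Comparing this statistic between the sandflat and channel detections would reproduce the kind of contrast the case study reports.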
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis on various system characteristics such as its size, catadioptric spatial resolution, field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting out of a single image captured from a real-life experiment. We expect the reproducibility of our sensor as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision under different circumstances. PMID:26861351
Subar, Amy F; Crafts, Jennifer; Zimmerman, Thea Palmer; Wilson, Michael; Mittl, Beth; Islam, Noemi G; McNutt, Suzanne; Potischman, Nancy; Buday, Richard; Hull, Stephen G; Baranowski, Tom; Guenther, Patricia M; Willis, Gordon; Tapia, Ramsey; Thompson, Frances E
2010-01-01
To assess the accuracy of portion-size estimates and participant preferences using various presentations of digital images. Two observational feeding studies were conducted. In both, each participant selected and consumed foods for breakfast and lunch, buffet style, serving themselves portions of nine foods representing five forms (eg, amorphous, pieces). Serving containers were weighed unobtrusively before and after selection as was plate waste. The next day, participants used a computer software program to select photographs representing portion sizes of foods consumed the previous day. Preference information was also collected. In Study 1 (n=29), participants were presented with four different types of images (aerial photographs, angled photographs, images of mounds, and household measures) and two types of screen presentations (simultaneous images vs an empty plate that filled with images of food portions when clicked). In Study 2 (n=20), images were presented in two ways that varied by size (large vs small) and number (4 vs 8). Convenience sample of volunteers of varying background in an office setting. Repeated-measures analysis of variance of absolute differences between actual and reported portion sizes by presentation methods. Accuracy results were largely not statistically significant, indicating that no one image type was most accurate. Accuracy results indicated the use of eight vs four images was more accurate. Strong participant preferences supported presenting simultaneous vs sequential images. These findings support the use of aerial photographs in the automated self-administered 24-hour recall. For some food forms, images of mounds or household measures are as accurate as images of food and, therefore, are a cost-effective alternative to photographs of foods. Copyright 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Use of Aerial Hyperspectral Imaging For Monitoring Forest Health
Milton O. Smith; Nolan J. Hess; Stephen Gulick; Lori G. Eckhardt; Roger D. Menard
2004-01-01
This project evaluates the effectiveness of aerial hyperspectral digital imagery in the assessment of forest health of loblolly stands in central Alabama. The imagery covers 50 square miles in Bibb and Hale Counties, south of Tuscaloosa, AL, which include intensively managed forest industry sites and National Forest lands with multiple use objectives. Loblolly stands...
Very Large Scale Aerial (VLSA) imagery for assessing postfire bitterbrush recovery
Corey A. Moffet; J. Bret Taylor; D. Terrance Booth
2008-01-01
Very large scale aerial (VLSA) imagery is an efficient tool for monitoring bare ground and cover on extensive rangelands. This study was conducted to determine whether VLSA images could be used to detect differences in antelope bitterbrush (Purshia tridentata Pursh DC) cover and density among similar ecological sites with varying postfire recovery...
Remote sensing and image interpretation
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Kiefer, R. W. (Principal Investigator)
1979-01-01
A textbook prepared primarily for use in introductory courses in remote sensing is presented. Topics covered include concepts and foundations of remote sensing; elements of photographic systems; introduction to airphoto interpretation; airphoto interpretation for terrain evaluation; photogrammetry; radiometric characteristics of aerial photographs; aerial thermography; multispectral scanning and spectral pattern recognition; microwave sensing; and remote sensing from space.
Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery
B. Cooke; A. Saucier
1995-01-01
Scientists with the USDA Forest Service are currently assessing the usefulness of aerial video imagery for various purposes including midcycle inventory updates. The potential of video image data for these purposes may be compromised by scan line interleaving displacement problems. Interleaving displacement problems cause features in video raster datasets to have...
Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.
1982-01-01
A two phase sampling method and the optimal sampling segment dimensions for the estimation of sugar cane cultivated area were developed. This technique employs visual interpretations of LANDSAT images and panchromatic aerial photographs considered as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two phase design was 157% when compared with a one phase aerial photographs sample.
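A minimal sketch of the two-phase (double sampling) ratio estimator described above, with hypothetical segment values: LANDSAT interpretations serve as the cheap phase-1 variable and aerial-photo interpretations as the phase-2 ground truth.

```python
def two_phase_ratio_estimate(landsat_phase1, landsat_phase2, photo_phase2):
    """Two-phase (double sampling) ratio estimator: scale the cheap
    phase-1 LANDSAT mean by the photo/LANDSAT ratio observed on the
    phase-2 subsample, where aerial photos serve as ground truth."""
    xbar1 = sum(landsat_phase1) / len(landsat_phase1)
    xbar2 = sum(landsat_phase2) / len(landsat_phase2)
    ybar2 = sum(photo_phase2) / len(photo_phase2)
    return xbar1 * (ybar2 / xbar2)

# Hypothetical cane areas (ha) per segment: phase-1 LANDSAT estimates,
# then LANDSAT and photo values on the phase-2 subsample.
est = two_phase_ratio_estimate([80.0, 120.0, 100.0],
                               [90.0, 110.0],
                               [99.0, 121.0])   # -> 110.0 ha
```

The ratio correction is what lets the cheap LANDSAT sample approach the accuracy of the aerial-photo ground truth, as reflected in the 99.3% figure reported above.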
Mapping lava flow textures using three-dimensional measures of surface roughness
NASA Astrophysics Data System (ADS)
Mallonee, H. C.; Kobs-Nawotniak, S. E.; McGregor, M.; Hughes, S. S.; Neish, C.; Downs, M.; Delparte, D.; Lim, D. S. S.; Heldmann, J. L.
2016-12-01
Lava flow emplacement conditions are reflected in the surface textures of a lava flow; unravelling these conditions is crucial to understanding the eruptive history and characteristics of basaltic volcanoes. Mapping lava flow textures from visual imagery alone is an inherently subjective process: the images generally lack the resolution needed to make these determinations, and transitional textures such as rubbly and slabby pāhoehoe are similar in appearance and defined only qualitatively. This is particularly problematic for interpreting planetary lava flow textures, where data are more limited. We present a tool to objectively classify lava flow textures based on quantitative measures of roughness, including the 2D Hurst exponent, RMS height, and 2D:3D surface area ratio. We collected aerial images at Craters of the Moon National Monument (COTM) using Unmanned Aerial Vehicles (UAVs) in 2015 and 2016 as part of the FINESSE (Field Investigations to Enable Solar System Science and Exploration) and BASALT (Biologic Analog Science Associated with Lava Terrains) research projects. The aerial images were stitched together to create Digital Terrain Models (DTMs) with resolutions on the order of centimeters. The DTMs were evaluated by the classification tool described above, with output compared against field assessment of the texture. Further, the DTMs were downsampled and reevaluated to assess the efficacy of the classification tool at data resolutions similar to current datasets from other planetary bodies. This tool allows objective classification of lava flow texture, which enables more accurate interpretations of flow characteristics. This work also gives context for interpretations of flows with comparatively low data resolutions, such as those on the Moon and Mars.
Textural maps based on quantitative measures of roughness are a valuable asset for studies of lava flows on Earth and other planetary bodies.
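Two of the roughness measures named above, RMS height and the 2D:3D surface area ratio, are straightforward to compute from a gridded DTM; this is a minimal sketch on a toy grid, not the project's tool.

```python
import math

def rms_height(dem):
    """RMS height: root-mean-square deviation of elevations from the mean."""
    flat = [z for row in dem for z in row]
    mu = sum(flat) / len(flat)
    return math.sqrt(sum((z - mu) ** 2 for z in flat) / len(flat))

def surface_area_ratio(dem, cell=1.0):
    """3D:2D area ratio: each grid cell is split into two triangles whose
    3-D areas are summed and divided by the flat (planimetric) area."""
    def tri_area(p, q, r):
        ux, uy, uz = (q[i] - p[i] for i in range(3))
        vx, vy, vz = (r[i] - p[i] for i in range(3))
        cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

    rows, cols = len(dem), len(dem[0])
    a3d = 0.0
    for i in range(rows - 1):
        for j in range(cols - 1):
            p00 = (j * cell, i * cell, dem[i][j])
            p01 = ((j + 1) * cell, i * cell, dem[i][j + 1])
            p10 = (j * cell, (i + 1) * cell, dem[i + 1][j])
            p11 = ((j + 1) * cell, (i + 1) * cell, dem[i + 1][j + 1])
            a3d += tri_area(p00, p01, p11) + tri_area(p00, p11, p10)
    a2d = (rows - 1) * (cols - 1) * cell * cell
    return a3d / a2d
```

A perfectly flat DTM gives an RMS height of 0 and a ratio of 1; rougher ʻaʻā-like surfaces push the ratio above 1, which is what makes these metrics usable as texture classifiers.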
NASA Astrophysics Data System (ADS)
Scheidt, S. P.; Whelley, P.; Hamilton, C.; Bleacher, J. E.; Garry, W. B.
2015-12-01
The December 31, 1974 lava flow from Kilauea Caldera, Hawaii within the Hawaii Volcanoes National Park was selected for field campaigns as a terrestrial analog for Mars in support of NASA Planetary Geology and Geophysics (PGG) research and the Remote, In Situ and Synchrotron Studies for Science and Exploration (RIS4E) node of the Solar System Exploration Research Virtual Institute (SSERVI) program. The lava flow was a rapidly emplaced unit that was strongly influenced by existing topography, which favored the formation of a tributary lava flow system. The unit includes a diverse range of surface textures (e.g., pāhoehoe, ʻaʻā, and transitional lavas) and structural features (e.g., streamlined islands, pits, and interactions with older tumuli). However, these features are generally below the threshold of visibility within previously acquired airborne and spacecraft data. In this study, we have generated unique, high-resolution digital images using a low-altitude Kite Aerial Photography (KAP) system during field campaigns in 2014 and 2015 (National Park Service permit #HAVO-2012-SCI-0025). The kite-based mapping platform (nadir-viewing) and a radio-controlled gimbal (allowing pointing) provided data similar to those from an unmanned aerial vehicle (UAV), but with longer flight time, larger total data volumes per sortie, fewer regulatory challenges, and lower cost. Images acquired from KAP and UAVs are used to create orthomosaics and DEMs using Multi-View Stereo-Photogrammetry (MVSP) software. The 3-dimensional point clouds are extremely dense, resulting in a grid resolution of < 2 cm. Airborne Light Detection and Ranging (LiDAR) / Terrestrial Laser Scanning (TLS) data have been collected for these areas and provide a basis of comparison or "ground truth" for the photogrammetric data. Our results show a good comparison with LiDAR/TLS data, each offering their own unique advantages and potential for data fusion.
The use of unmanned aerial vehicle imagery in intertidal monitoring
NASA Astrophysics Data System (ADS)
Konar, Brenda; Iken, Katrin
2018-01-01
Intertidal monitoring projects are often limited in their practicality because traditional methods such as visual surveys or removal of biota are often limited in the spatial extent for which data can be collected. Here, we used imagery from a small unmanned aerial vehicle (sUAV) to test their potential use in rocky intertidal and intertidal seagrass surveys in the northern Gulf of Alaska. Images captured by the sUAV in the high, mid and low intertidal strata on a rocky beach and within a seagrass bed were compared to data derived concurrently from observer visual surveys and to images taken by observers on the ground. Observer visual data always resulted in the highest taxon richness, but when observer data were aggregated to the lower taxonomic resolution obtained by the sUAV images, overall community composition was mostly similar between the two methods. Ground camera images and sUAV images yielded mostly comparable community composition despite the typically higher taxonomic resolution obtained by the ground camera. We conclude that monitoring goals or research questions that can be answered on a relatively coarse taxonomic level can benefit from an sUAV-based approach because it allows much larger spatial coverage within the time constraints of a low tide interval than is possible by observers on the ground. We demonstrated this large-scale applicability by using sUAV images to develop maps that show the distribution patterns and patchiness of seagrass.
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery.
Zhao, Yi; Ma, Jiale; Li, Xiaohui; Zhang, Jie
2018-02-27
An unmanned aerial vehicle (UAV) equipped with global positioning systems (GPS) can provide direct georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification is the lack of unified identification marks: the fire features of color, shape, and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., a Deep Convolutional Neural Network, DCNN) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used in data augmentation and in the formation of a standard fire image dataset 'UAV_Fire'. A 15-layered self-learning DCNN architecture named 'Fire_Net' is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with regard to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, 'Fire_Net' guaranteed an average processing speed of 41.5 ms per image for real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports, and all of them were accurately identified.
NASA Astrophysics Data System (ADS)
Osipov, Gennady
2013-04-01
We propose a solution to the problem of exploration of various mineral resource deposits, determination of their forms / classification of types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. Course of data processing and making forecast can be divided into some stages: Pre-processing of images. Normalization of color and luminosity characteristics, determination of the necessary contrast level and integration of a great number of separate photos into a single map of the region are performed. Construction of semantic map image. Recognition of bitmapped image and allocation of objects and primitives known to system are realized. Intelligent analysis. At this stage acquired information is analyzed with the help of a knowledge base, which contain so-called "attention landscapes" of experts. Used methods of recognition and identification of images: a) combined method of image recognition, b)semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, d)cognitive technology of processing and interpretation of images. This stage is fundamentally new and it distinguishes suggested technology from all others. Automatic registration of allocation of experts` attention - registration of so-called "attention landscape" of experts - is the base of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on denoted principles involves the next stages, which are implemented in corresponding program agents. Training mode -> Creation of base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images. 
Training mode includes noncontact registration of eye motion, reconstruction of the "attention landscape" fixed by the expert, recording of comments by the expert (a specialist in image interpretation), and transfer of this information into the knowledge base. Creation of the base of ophthalmologic images (OI) includes making semantic contacts from a great number of OI, based on analysis of the OI and the expert's comments. Processing of OI and making generalized OI (GOI) is realized by inductive logic algorithms and consists in the synthesis of structural invariants of OI. The mode of recognition and interpretation of unknown images consists of several stages: comparison of an unknown image with the base of structural invariants of OI; revealing of structural invariants in unknown images; synthesis of an interpretive message from the structural-invariants base and the OI base (the experts' comments stored in it). We want to emphasize that the training mode does not require special involvement of experts to teach the system: it is realized in the process of the experts' regular work on image interpretation and becomes possible after installation of a special apparatus for noncontact registration of the experts' attention. Consequently, the technology whose principles are described here provides a fundamentally new and effective solution to the problem of exploration of mineral resource deposits based on computer analysis of aerial and satellite image data.
Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark
2018-05-09
Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks since it is labor intensive, prone to human error, and creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost.
Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.
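The residential/nonresidential decision described above is, at its core, binary image classification. The paper's actual model is a deep convolutional network; as a hedged stand-in, the sketch below trains a plain logistic-regression classifier on synthetic 8×8 "tiles" (all data, sizes, and parameters here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tile(residential):
    """Synthetic 8x8 grayscale tile: 'residential' tiles get a bright
    rectangular 'rooftop'; 'nonresidential' tiles are smooth terrain."""
    tile = rng.normal(0.3, 0.05, (8, 8))
    if residential:
        r, c = rng.integers(0, 5, 2)
        tile[r:r + 3, c:c + 3] += 0.6   # rooftop block
    return tile.clip(0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=300):
    """Gradient-descent logistic regression (stand-in for the deep model)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Train and score on synthetic tiles (even indices = residential).
X = np.array([make_tile(i % 2 == 0).ravel() for i in range(200)])
y = (np.arange(200) % 2 == 0).astype(float)
w, b = train_logreg(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1.0))
```

On this toy data a linear model separates the classes almost perfectly; the paper's point is that a CNN achieves comparable discrimination on real aerial tiles, where hand-set features fail.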
Advanced Image Processing of Aerial Imagery
NASA Technical Reports Server (NTRS)
Woodell, Glenn; Jobson, Daniel J.; Rahman, Zia-ur; Hines, Glenn
2006-01-01
Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at the NASA Langley Research Center have developed techniques which have proven to be useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system will be described and numerous examples will be shown.
Retrieve polarization aberration from image degradation: a new measurement method in DUV lithography
NASA Astrophysics Data System (ADS)
Xiang, Zhongbo; Li, Yanqiu
2017-10-01
Detailed knowledge of the polarization aberration (PA) of the projection lens in higher-NA DUV lithographic imaging is necessary due to its impact on imaging degradation, and precise measurement of PA is conducive to computational lithography techniques such as RET and OPC. Current in situ measurement methods of PA through the detection of degradations of aerial images need a linear approximation and apply the assumption of a 3-beam/2-beam interference condition. The former approximation neglects the coupling effect of the PA coefficients, which significantly influences the accuracy of PA retrieval. The latter assumption restricts the feasible pitch of test masks in higher-NA systems, conflicts with the Kirchhoff diffraction model of the test mask used in the retrieval model, and introduces the 3D mask effect as a source of retrieval error. In this paper, a new in situ measurement method of PA is proposed. It establishes an analytical quadratic relation between the PA coefficients and the degradations of aerial images of one-dimensional dense lines under coherent illumination through vector aerial imaging, which does not rely on the 3-beam/2-beam interference assumption or linear approximation. In this case, the retrieval of PA from image degradation can be converted from a nonlinear system of m quadratic equations to a multi-objective quadratic optimization problem, and finally solved by the nonlinear least-squares method. Some preliminary simulation results are given to demonstrate the correctness and accuracy of the new PA retrieval model.
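The retrieval step described above, converting m quadratic equations in the PA coefficients into a least-squares problem, can be sketched generically. The quadratic forward model d_i = cᵀA_i c + b_iᵀc below is a hypothetical stand-in with random A_i and b_i, not the paper's actual vector-imaging model; it shows only the Gauss-Newton solution of such a system:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 12   # 4 hypothetical PA coefficients, 12 degradation measurements

# Hypothetical quadratic forward model: d_i = c^T A_i c + b_i^T c.
A = rng.normal(size=(m, n, n))
B = rng.normal(size=(m, n))
c_true = rng.normal(size=n)
d = np.einsum('i,mij,j->m', c_true, A, c_true) + B @ c_true

def gauss_newton(c, iters=50):
    """Nonlinear least-squares solution of the m quadratic equations."""
    for _ in range(iters):
        r = np.einsum('i,mij,j->m', c, A, c) + B @ c - d             # residuals
        J = np.einsum('mij,j->mi', A + A.transpose(0, 2, 1), c) + B  # Jacobian
        c = c - np.linalg.solve(J.T @ J, J.T @ r)                    # normal equations
    return c

# Start from a slightly perturbed guess and recover the coefficients.
c_est = gauss_newton(c_true + 0.05 * rng.normal(size=n))
```

Because the measurements are consistent with the model, the residual vanishes at the solution; in the paper the same structure is solved as a multi-objective quadratic optimization.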
Remote and In Situ Observations of Surfzone and Inner-Shelf Tracer Dispersion
NASA Astrophysics Data System (ADS)
Hally-Rosendahl, K.; Feddersen, F.; Clark, D.; Guza, R. T.
2014-12-01
Surfzone and inner-shelf tracer dispersion was observed at the approximately alongshore-uniform Imperial Beach, California during the IB09 experiment. Rhodamine dye tracer, released continuously near the shoreline for several hours, was advected alongshore by breaking wave- and wind-driven currents, and ejected offshore from the surfzone to the inner-shelf by transient rips. Aerial multispectral imaging of inner-shelf dye concentration complemented in situ surfzone and inner-shelf measurements of dye, temperature, waves, and currents, providing tracer transport and dispersion observations spanning approximately 400 m cross-shore and 3 km alongshore. Combined in situ and aerial measurements approximately close a surfzone and inner-shelf dye budget. Mean alongshore dye dilution follows a power-law relationship, and both spatial and temporal dye variability decrease with distance from the release. Aerial images reveal coherent inner-shelf dye plume structures extending over 300 m offshore with alongshore length scales up to 400 m. Plume tracking among successive images yields inner-shelf alongshore advection rates consistent with in situ observations. Alongshore advection is faster within the surfzone than on the inner-shelf, and the leading alongshore edge of inner-shelf dye is due to local transient rip ejections from the surfzone. A combination of in situ and aerial surfzone and inner-shelf measurements is used to quantify cross- and alongshore dye tracer transports. This work is funded by NSF (including a Graduate Research Fellowship, Grant No. DGE1144086), ONR, and California Sea Grant. Figure: Aerial multispectral image of surface dye concentration (parts per billion, see colorbar) versus cross-shore coordinate x and alongshore coordinate y, approximately 5 hours after the start of a continuous dye release (green star). The mean shoreline is at x=0 m. Dark gray indicates the beach and a pier, and light gray indicates regions outside the imaged area.
Black indicates unresolved regions due to foam from wave breaking. Vertical dashed line delimits the surfzone (SZ) and inner-shelf (IS). Yellow diamonds indicate locations of in situ measurements of dye, temperature, waves, and currents. Yellow circles indicate locations of in situ dye and temperature measurements.
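The power-law dilution relationship mentioned in the abstract can be recovered from concentration measurements by a log-log linear fit. The distances, concentrations, and exponent below are invented for illustration:

```python
import numpy as np

# Hypothetical dye observations: mean concentration (ppb) vs alongshore
# distance from the release (m), following D = a * y^(-k) exactly here.
y_dist = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0, 3000.0])
conc = 50.0 * y_dist ** -0.33

# A power law is linear in log-log space: log D = log a - k log y.
slope, log_a = np.polyfit(np.log(y_dist), np.log(conc), 1)
k = -slope          # dilution exponent
a = np.exp(log_a)   # concentration scale
```

With real field data the points scatter about the line, and the fitted exponent summarizes how quickly the tracer dilutes alongshore.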
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
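As a rough illustration of the deterministic-annealing idea, combining two classifiers' outputs by minimizing an energy function while gradually lowering a temperature, the sketch below fuses two per-pixel class-probability maps. The energy terms, cooling schedule, and parameters are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def dsa_combine(p1, p2, beta=1.0, T0=2.0, cooling=0.9, steps=30):
    """Deterministic-annealing fusion of two classifiers' outputs.

    p1, p2: (n_pixels, n_classes) class-probability maps from the two
    simple classifiers. Soft labels q are repeatedly updated from an
    energy that rewards agreement with both classifiers and with the
    1-D pixel neighbourhood; the softmax temperature T is lowered each
    step, hardening the assignment (the annealing that helps escape
    poor local minima)."""
    q = (p1 + p2) / 2.0
    T = T0
    for _ in range(steps):
        nb = q.copy()
        nb[1:] += q[:-1]        # left-neighbour support
        nb[:-1] += q[1:]        # right-neighbour support
        energy = -(np.log(p1 + 1e-9) + np.log(p2 + 1e-9)) - beta * nb
        q = np.exp(-energy / T)
        q /= q.sum(axis=1, keepdims=True)
        T *= cooling            # deterministic cooling schedule
    return q.argmax(axis=1)     # hard labels after annealing
```

On noisy inputs where both classifiers weakly favor the true class, the neighbourhood term and the cooling drive the soft labels to a consistent hard labeling.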
Cadastral Audit and Assessments Using Unmanned Aerial Systems
NASA Astrophysics Data System (ADS)
Cunningham, K.; Walker, G.; Stahlke, E.; Wilson, R.
2011-09-01
Ground surveys and remote sensing are integral to establishing fair and equitable property valuations necessary for real property taxation. The International Association of Assessing Officers (IAAO) has embraced aerial and street-view imaging as part of its standards related to property tax assessments and audits. New technologies, including unmanned aerial systems (UAS) paired with imaging sensors, will become more common as local governments work to ensure their cadastre and tax rolls are both accurate and complete. Trends in mapping technology have seen an evolution in platforms from large, expensive manned aircraft to very small, inexpensive UAS. Traditional methods of photogrammetry have also given way to new equipment and sensors: digital cameras, infrared imagers, light detection and ranging (LiDAR) laser scanners, and now synthetic aperture radar (SAR). At the University of Alaska Fairbanks (UAF), we work extensively with unmanned aerial systems equipped with each of these newer sensors. UAF has significant experience flying unmanned systems in the US National Airspace, having begun in 1969 with scientific rockets and expanded to unmanned aircraft in 2003. Ongoing field experience allows UAF to partner effectively with outside organizations to test and develop leading-edge research in UAS and remote sensing. This presentation will discuss our research related to various sensors and payloads for mapping. We will also share our experience with UAS and optical systems for creating some of the first cadastral surveys in rural Alaska.
27. AERIAL VIEW OF ARVFS FIELD TEST SITE AS IT ...
27. AERIAL VIEW OF ARVFS FIELD TEST SITE AS IT LOOKED IN 1983. OBLIQUE VIEW FACING EAST. BUNKER IS IN FOREGROUND, PROTECTIVE SHED FOR WFRP AT TOP OF IMAGE. INEL PHOTO NUMBER 83-574-12-1, TAKEN IN 1983. PHOTOGRAPHER: ROMERO. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
"A" Is for Aerial Maps and Art
ERIC Educational Resources Information Center
Todd, Reese H.; Delahunty, Tina
2007-01-01
The technology of satellite imagery and remote sensing adds a new dimension to teaching and learning about maps with elementary school children. Just a click of the mouse brings into view some images of the world that could only be imagined a generation ago. Close-up aerial pictures of the school and neighborhood quickly catch the interest of…
Reconstructing Buildings with Discontinuities and Roof Overhangs from Oblique Aerial Imagery
NASA Astrophysics Data System (ADS)
Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.
2017-05-01
This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures, the input data is transformed into a dense point cloud, segmented, and filtered with a modified marching cubes algorithm to reduce positional noise. Assuming a monolithic building, the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground, and roof planes. If this fails due to the presence of discontinuities, the regression is repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube, a planar piece of the current surface is approximated and expanded. The resulting segments are mutually intersected, yielding both topological and geometrical nodes and edges. These entities are eliminated if their distance-based affiliation to the defining point sets is violated, leaving a consistent building hull including its structural breaks. To add the roof overhangs, the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap, and translated back into world space to become a component of the building. As soon as the reconstructed objects are finished, the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification, using a partially parallel placement algorithm.
Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas, which are reintegrated into the building models. To evaluate the performance of the proposed method, a proof-of-concept test on sample structures obtained from real-world data of Heligoland/Germany has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization, and visually attractive render results.
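The RANSAC-based plane regression used above for the wall, ground, and roof planes can be sketched minimally as follows (the point data, iteration count, and inlier tolerance are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def ransac_plane(pts, n_iter=200, tol=0.05):
    """Fit a plane to a noisy point set: repeatedly hypothesize a plane
    from 3 random points and keep the one with the most inliers."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    best_plane = None
    for _ in range(n_iter):
        s = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        normal /= norm
        d = normal @ s[0]
        inliers = np.abs(pts @ normal - d) < tol  # point-to-plane distances
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

In the paper this regression runs per voxel (or on the projected 2D grid); the sketch shows only the core consensus loop for a single plane.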
Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M
2011-01-01
Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.
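The 2-D power spectrum parameters mentioned above can be illustrated with a radially averaged spectrum; the binning scheme here is an illustrative assumption, not the authors' exact parameterization:

```python
import numpy as np

def power_spectrum_features(img, n_bins=8):
    """Radially averaged 2-D power spectrum of a grayscale image.

    Urban scenes with regular street grids distribute spectral power
    differently from natural clutter, so the per-ring averages form a
    compact global descriptor."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    P = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)          # radial frequency
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    return np.array([P[(r >= edges[i]) & (r < edges[i + 1])].mean()
                     for i in range(n_bins)])
```

A low-frequency pattern (e.g., a coarse sinusoid) concentrates its energy in the innermost ring, while fine texture pushes energy outward.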
Comparison of line shortening assessed by aerial image and wafer measurements
NASA Astrophysics Data System (ADS)
Ziegler, Wolfram; Pforr, Rainer; Thiele, Joerg; Maurer, Wilhelm
1997-02-01
The increasing number of patterns per area and decreasing linewidths demand enhancement technologies for optical lithography. OPC, the correction of systematic non-linearity in the pattern transfer process by correction of design data, is one possibility to tighten process control and to increase the lifetime of existing lithographic equipment. The two most prominent proximity effects to be corrected by OPC are CD variation and line shortening. Line shortening measured on a wafer is up to 2 times larger than full resist simulation results. Therefore, the influence of mask geometry on line shortening is a key item in parameterizing lithography. The following paper discusses the effect of adding small serifs to line ends with a 0.25 micrometer ground-rule design. For reticles produced on an ALTA 3000 with a standard wet etch process, the corner rounding on the mask can be reduced by adding serifs of a certain size. The corner rounding was measured and its effect on line shortening on the wafer determined. This was investigated by resist measurements on the wafer, aerial image plus resist simulation, and aerial image measurements on the AIMS microscope.
He, Hong; Cheng, Xiao; Li, Xianglan; Zhu, Renbin; Hui, Fengming; Wu, Wenhui; Zhao, Tiancheng; Kang, Jing; Tang, Jianwu
2017-10-11
Penguin guano provides favorable conditions for production and emission of greenhouse gases (GHGs). Many studies have been conducted to determine the GHG fluxes from penguin colonies; however, at the regional scale, there is still no accurate estimation of total GHG emissions. We used an object-based image analysis (OBIA) method to estimate the Adélie penguin (Pygoscelis adeliae) population based on aerial photography data. A model was developed to estimate total GHG emission potential from Adélie penguin colonies during the breeding seasons in 1983 and 2012, respectively. Results indicated that the OBIA method was effective for extracting penguin information from aerial photographs. There were 17,120 and 21,183 Adélie penguin breeding pairs on Inexpressible Island in 1983 and 2012, respectively, with an overall estimation accuracy of 76.8%. The main reasons for the increase in Adélie penguin populations were attributed to increases in temperature, sea ice, and phytoplankton. The average estimated CH4 and N2O emissions tended to increase during the period from 1983 to 2012, and CH4 was the main GHG emitted from penguin colonies. The total global warming potential (GWP) of CH4 and N2O emissions was 5303 kg CO2-eq in 1983 and 6561 kg CO2-eq in 2012, respectively.
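The CO2-equivalent totals quoted above combine the two gases through their global warming potentials. A minimal sketch, assuming the IPCC AR4 100-year GWP factors (25 for CH4, 298 for N2O); the paper's exact factors are not stated in the abstract:

```python
# GWP factors: IPCC AR4 100-year values (an assumption; the paper's
# exact factors are not given in the abstract).
GWP_CH4 = 25.0    # kg CO2-eq per kg CH4
GWP_N2O = 298.0   # kg CO2-eq per kg N2O

def total_gwp(ch4_kg, n2o_kg):
    """Combined CO2-equivalent mass (kg) of CH4 and N2O emissions."""
    return ch4_kg * GWP_CH4 + n2o_kg * GWP_N2O
```

For example, 100 kg of CH4 and 10 kg of N2O combine to 5480 kg CO2-eq.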
High Resolution UAV-based Passive Microwave L-band Imaging of Soil Moisture
NASA Astrophysics Data System (ADS)
Gasiewski, A. J.; Stachura, M.; Elston, J.; McIntyre, E. M.
2013-12-01
Due to long electrical wavelengths and aperture size limitations, scaling passive microwave remote sensing of soil moisture from spaceborne low-resolution applications to high-resolution applications suitable for precision agriculture requires the use of low-flying aerial vehicles. This presentation summarizes a project to develop a commercial Unmanned Aerial Vehicle (UAV) hosting a precision microwave radiometer for mapping of soil moisture in high-value shallow root-zone crops. The project is based on the use of the Tempest electric-powered UAV and a compact digital L-band (1400-1427 MHz) passive microwave radiometer developed specifically for extremely small and lightweight aerial platforms or man-portable, tractor, or tower-based applications. Notable in this combination are a highly integrated UAV/radiometer antenna design and the use of both the upwelling emitted signal from the surface and the downwelling cold-space signal for precise calibration using a lobe-correlating radiometer architecture. The system achieves a spatial resolution comparable to the altitude of the UAV above the ground while referencing upwelling measurements to the constant and well-known background temperature of cold space. The radiometer incorporates digital sampling and radio frequency interference mitigation along with infrared, near-infrared, and visible (red) sensors for surface temperature and vegetation biomass correction. This NASA-sponsored project is being developed for commercial application in cropland water management, L-band satellite validation, and estuarine plume studies.
Performance Evaluation of 3d Modeling Software for Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Yanagi, H.; Chikatsu, H.
2016-06-01
UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. Motivated by this, Smart3DCapture and Pix4Dmapper were downloaded from the Internet, and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.
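Accuracy evaluation against 3D coordinate check points, as described above, typically reduces to a per-axis RMSE between reconstructed and surveyed coordinates. A minimal sketch (the check-point values in the test are invented):

```python
import numpy as np

def checkpoint_rmse(measured, reference):
    """Per-axis RMSE (x, y, z) between reconstructed 3-D check-point
    coordinates and their surveyed reference values."""
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    return np.sqrt((diff ** 2).mean(axis=0))
```

Reporting the three axes separately is useful because photogrammetric height (z) error usually exceeds planimetric error.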
Using aerial images for establishing a workflow for the quantification of water management measures
NASA Astrophysics Data System (ADS)
Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg
2017-04-01
Quantified landscape characteristics, such as morphology, land use or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. A powerful assimilation and geospatial analysis of remote sensing datasets in combination with hydrological modeling allows landscape parameters and water balances to be quantified efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. In order to minimize uncertainties and inconsistencies, input parameters for modeling should be extracted mainly from one remote-sensing dataset if possible. Here, aerial images have been chosen because of their high spatial and spectral resolution, which permits the extraction of various model-relevant parameters, like morphology, land use or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models, through multispectral classification and segmentation of land use distribution maps, to mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modeling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user input interface for the Soil and Water Assessment Tool (SWAT), was chosen.
The results of this modeling approach provide the basis for anticipating future development of the hydrological system, and regarding system changes for the adaption of water resource management decisions.
Thermal/structural/optical integrated design for optical sensor mounted on unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Zhang, Gaopeng; Yang, Hongtao; Mei, Chao; Wu, Dengshan; Shi, Kui
2016-01-01
With the rapid development of science and technology and the demands of many local conflicts around the world, high-altitude optical sensors mounted on unmanned aerial vehicles are increasingly applied in airborne remote sensing, measurement, and detection. In order to obtain high-quality images from an aero optical remote sensor, it is important to analyze its thermal-optical performance under conditions of high speed and high altitude. Especially for key imaging assemblies such as the optical window, temperature variation and temperature gradients can result in defocus and aberrations in the optical system, leading to poor image quality. In order to improve the optical performance of a high-speed aerial camera's optical window, a thermal/structural/optical integrated design method is developed. First, the flight environment of the optical window is analyzed. Based on the theory of aerodynamics and heat transfer, the convection heat transfer coefficient is calculated. The temperature distribution of the optical window is simulated with finite element analysis software, and the maximum temperature difference between the inside and outside of the optical window is obtained. Then the deformation of the optical window under the boundary condition of this maximum temperature difference is calculated. The optical window surface deformation is fitted with Zernike polynomials at the interface, and the calculated Zernike fitting coefficients are imported into and analyzed with the CODE V optical software. Finally, the transfer function diagrams of the optical system over the temperature field are comparatively analyzed. The results show that the optical path difference caused by thermal deformation of the optical window is 138.2 nm, which satisfies the PV ≤ λ/4 criterion. This study can serve as an important reference for other optical window designs.
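Fitting the simulated window deformation with Zernike polynomials, as in the workflow above, is a linear least-squares problem. The sketch below uses only a small illustrative subset of Zernike terms, not the full set passed to CODE V:

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike terms (piston, tilts, defocus, astigmatism)
    evaluated at polar coordinates on the unit disk."""
    return np.stack([
        np.ones_like(rho),              # piston
        rho * np.cos(theta),            # x-tilt
        rho * np.sin(theta),            # y-tilt
        2.0 * rho**2 - 1.0,             # defocus
        rho**2 * np.cos(2.0 * theta),   # primary astigmatism
    ], axis=1)

def fit_zernike(rho, theta, sag):
    """Least-squares Zernike coefficients of a sampled surface deformation."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(rho, theta), sag, rcond=None)
    return coeffs
```

The fitted coefficients are exactly the quantities handed to the optical-analysis software as the structural/optical interface.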
General Aviation Citizen Science Study to Help Tackle Remote Sensing of Harmful Algal Blooms (HABs)
NASA Technical Reports Server (NTRS)
Ansari, Rafat R.; Schubert, Terry
2018-01-01
We present a new, low-cost approach, based on volunteer pilots conducting high-resolution aerial imaging, to help document the onset, growth, and outbreak of harmful algal blooms (HABs) and related water quality issues in central and western Lake Erie. In this model study, volunteer private pilots acting as citizen scientists frequently flew over 200 mi of Lake Erie coastline, its islands, and freshwater estuaries, taking high-quality aerial photographs and videos. The photographs were taken in the nadir (vertical) position in red, green, and blue (RGB) and near-infrared (NIR) every 5 s with rugged, commercially available cameras with built-in Global Positioning System (GPS). The high-definition (HD) videos in 1080p format were taken continuously in an oblique forward direction. The unobstructed, georeferenced, high-resolution images and HD videos can provide an early warning of ensuing HAB events to coastal communities and freshwater resource managers. Scientists and academic researchers can use the data to complement collections of in situ water measurements and matching satellite imagery, to help develop advanced airborne instrumentation, and to validate their algorithms. The data may help develop empirical models, which may lead to the next steps in predicting a HAB event, as some observed watershed events changed water quality characteristics such as particle size, sedimentation, color, mineralogy, and turbidity delivered to the lake site. This paper shows the efficacy and scalability of citizen-science (CS) aerial imaging as a complementary tool for rapid emergency response in HAB monitoring, land and vegetation management, and scientific studies. This study can serve as a model for the monitoring and management of freshwater and marine aquatic systems.
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image-quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
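Two of the planning quantities behind "sharp, well-exposed" imagery, the ground sample distance and the exposure time that keeps forward-motion blur below a pixel, follow from simple geometry. A minimal sketch (the camera and flight numbers in the usage note are invented):

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground footprint of one pixel for a nadir-pointing camera (m)."""
    return pixel_pitch_m * altitude_m / focal_length_m

def max_shutter_time(gsd_m, ground_speed_m_s, max_blur_px=1.0):
    """Longest exposure (s) keeping forward-motion blur under max_blur_px."""
    return max_blur_px * gsd_m / ground_speed_m_s
```

For example, a 4.5 µm pixel behind a 16 mm lens at 100 m altitude gives a GSD of about 2.8 cm, and at 10 m/s ground speed the shutter must stay faster than roughly 1/350 s to hold blur below one pixel.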
Sanchez, Richard D.; Hudnut, Kenneth W.
2004-01-01
Aerial mapping of the San Andreas Fault System can be realized more efficiently and rapidly without ground control and conventional aerotriangulation. This is achieved by direct geopositioning of the exterior orientation of a digital imaging sensor using an integrated Global Positioning System (GPS) receiver and an Inertial Navigation System (INS). Crucial issues for this particular type of aerial mapping are the accuracy, scale, consistency, and speed achievable by such a system. To address these questions, an Applanix Digital Sensor System (DSS) was used to examine its potential for near real-time mapping. Large segments of vegetation along the San Andreas and Cucamonga faults near the foothills of the San Bernardino and San Gabriel Mountains were burned to the ground in the California wildfires of October-November 2003. A 175 km corridor through what was once a thickly vegetated and hidden fault surface was chosen for this study. Both faults pose a major hazard to the greater Los Angeles metropolitan area, and a near real-time mapping system could provide information vital to post-disaster response.
Cooperative Lander-Surface/Aerial Microflyer Missions for Mars Exploration
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Lay, Norman; Hine, Butler; Zornetzer, Steven
2004-01-01
Concepts are being investigated for exploratory missions to Mars based on Bioinspired Engineering of Exploration Systems (BEES), the guiding principle of this effort to develop biomorphic explorers. The novelty lies in the use of a robust telecom architecture for mission data return, utilizing multiple local relays (including the lander itself as a local relay and the explorers in the dual role of local relays) to enable ranges of 10 to 1,000 km and downlink of color imagery. As illustrated in Figure 1, multiple microflyers that can be either surface- or aerially launched are envisioned in shepherding, metamorphic, and imaging roles. These microflyers embody key bio-inspired principles in their flight control, navigation, and visual search operations. Honey-bee-inspired algorithms that use visual cues for autonomous navigation operations such as terrain following will be utilized. The instrument suite will consist of a panoramic imager and a polarization imager specifically optimized to detect ice and water. For microflyers, particularly at small sizes, bio-inspired solutions appear to offer better alternatives than conventional engineered approaches. This investigation addresses a wide range of interrelated issues, including desired scientific data, sizes, rates, and communication ranges that can be accomplished in alternative mission scenarios. The mission illustrated in Figure 1 offers the most robust telecom architecture and the longest range for exploration, with two landers available as main local relays in addition to an ephemeral aerial-probe local relay. The shepherding or metamorphic planes serve in their dual role as local relays and image data collection/storage nodes. Appropriate placement of the landing site for the scout lander with respect to the main mission lander can allow coverage of extremely large ranges and enable exhaustive survey of the area of interest.
In particular, this mission could help with path planning and risk mitigation in the traverse of a long-distance surface explorer/rover. The basic requirements of design and operation of BEES to implement these scenarios are discussed. Terrestrial applications of such concepts include distributed aerial/surface measurements of meteorological events (e.g., storm watch), seismic monitoring, reconnaissance, biological and chemical sensing, search and rescue, surveillance, autonomous security/protection agents, and delivery and lateral distribution of agents (sensors, surface/subsurface crawlers, clean-up agents). Figure 2 illustrates an Earth demonstration that is in development, and its implementation will illustrate the value of these biomorphic mission concepts.
NASA Astrophysics Data System (ADS)
Ma, Yi; Zhang, Jie; Zhang, Jingyu
2016-01-01
The coastal wetland, a transitional zone between terrestrial and marine ecosystems, is of great value for ecosystem services. Over the recent three decades, the area of coastal wetland has been decreasing and its ecological function gradually degrading under rapid economic development, which in turn restricts the sustainable development of the economy and society in the coastal areas of China. Monitoring coastal wetlands to establish their distribution and dynamic change is thus a major national need. The UAV, or unmanned aerial vehicle, is a new platform for remote sensing. Compared with traditional satellite and manned aerial remote sensing, it offers flexible deployment, imaging below cloud cover, strong maneuverability, and low cost. The merging of image and spectrum is a defining characteristic of hyperspectral remote sensing: at the time of imaging, the spectral curve of each pixel is also obtained, which is well suited for quantitative remote sensing, fine classification, and target detection. Addressing this frontier of remote sensing monitoring technology, and facing the demand for coastal wetland monitoring, this paper uses a UAV carrying a new hyperspectral imaging instrument to analyze the key technologies of monitoring coastal wetlands by UAV, on the basis of the current situation at home and abroad and an analysis of development trends. 
According to the characteristics of airborne hyperspectral data from UAVs, summarized as "three highs and one many", the key technologies that should be developed are as follows: 1) atmospheric correction of UAV hyperspectral data over coastal wetlands under complex underlying surfaces and variable geometry; 2) the best observation scale of the UAV platform, and scale transformation methods, when monitoring coastal wetland features; 3) high-precision classification and detection methods for typical features from multi-scale hyperspectral images based on time sequences. The research results of this paper should help move beyond the traditional concept of monitoring coastal wetlands by satellite and manned aerial vehicle, lead the development of this monitoring technology, and put forward a new technical proposal for establishing the distribution and changing trends of coastal wetlands and for carrying out their protection and management.
Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types
NASA Astrophysics Data System (ADS)
Gehrke, S.; Beshah, B. T.
2016-06-01
Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can absorb radiometric differences of various origins, compensating for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. 
It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing as well as radiometric adjustment for ortho-image mosaic generation.
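The global least-squares idea described above can be sketched in miniature. The snippet below solves for a per-image gain and offset from radiometric tie points, with one anchor image fixing the datum; this is a deliberately simplified stand-in for the paper's hierarchical, location-dependent model, and the tie-point values are illustrative assumptions:

```python
import numpy as np

def normalize(n_images, ties, anchor=0):
    """Per-image linear radiometric model (gain a_i, offset b_i).

    ties: list of (img_i, value_i, img_k, value_k) observed at the same
    ground point. Enforces a_i*v_i + b_i = a_k*v_k + b_k in least squares.
    """
    rows, rhs = [], []
    for i, vi, k, vk in ties:
        r = np.zeros(2 * n_images)
        r[i], r[n_images + i] = vi, 1.0    # + (a_i * v_i + b_i)
        r[k], r[n_images + k] = -vk, -1.0  # - (a_k * v_k + b_k)
        rows.append(r)
        rhs.append(0.0)
    # The anchor image fixes the datum: a_anchor = 1, b_anchor = 0
    # (enforced with a large weight).
    for col, val in ((anchor, 1.0), (n_images + anchor, 0.0)):
        r = np.zeros(2 * n_images)
        r[col] = 1e3
        rows.append(r)
        rhs.append(1e3 * val)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x[:n_images], x[n_images:]  # gains, offsets
```

Applying the recovered gain and offset to each image's histogram is then the (here omitted) per-pixel correction step.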
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. In traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method, and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
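A minimal sketch of a multi-scale Retinex with an arbitrary number of scales and histogram-truncation remapping. It assumes equal scale weights and a single-channel image; the paper's explicit multi-scale representation and its color-consistency handling are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), clip=0.01):
    """MSR with N scales plus histogram truncation (illustrative sketch).

    img: 2D array of non-negative intensities. Returns values in [0, 1].
    """
    img = np.asarray(img, dtype=float) + 1.0  # avoid log(0)
    # Equal-weight sum over scales of log(I) - log(Gaussian-blurred I).
    msr = np.zeros_like(img)
    for s in sigmas:
        msr += np.log(img) - np.log(gaussian_filter(img, s))
    msr /= len(sigmas)
    # Histogram truncation: discard the lowest/highest `clip` fraction,
    # then remap linearly to the display range [0, 1].
    lo, hi = np.quantile(msr, [clip, 1.0 - clip])
    return np.clip((msr - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```

Passing more (or fewer) sigmas changes the number of scales directly, which is the flexibility the paper argues for.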
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
EUV focus sensor: design and modeling
NASA Astrophysics Data System (ADS)
Goldberg, Kenneth A.; Teyssier, Maureen E.; Liddle, J. Alexander
2005-05-01
We describe performance modeling and design optimization of a prototype EUV focus sensor (FS) designed for use with existing 0.3-NA EUV projection-lithography tools. At 0.3 NA and 13.5-nm wavelength, the depth of focus shrinks to 150 nm, increasing the importance of high-sensitivity focal-plane detection tools. The FS is a free-standing Ni grating structure that works in concert with a simple mask pattern of regular lines and spaces at constant pitch. The FS pitch matches that of the image-plane aerial-image intensity: it transmits the light with high efficiency when the grating is aligned with the aerial image laterally and longitudinally. Using a single-element photodetector to detect the transmitted flux, the FS is scanned laterally and longitudinally so that the plane of peak aerial-image contrast can be found. The design under consideration has a fixed image-plane pitch of 80 nm, with aperture widths of 12-40 nm (1-3 wavelengths), and aspect ratios of 2-8. TEMPEST-3D is used to model the light transmission. Careful attention is paid to the annular, partially coherent, unpolarized illumination and to the annular pupil of the Micro-Exposure Tool (MET) optics for which the FS is designed. The system design balances the opposing needs of high sensitivity and high throughput, optimizing the signal-to-noise ratio in the measured intensity contrast.
EUV Focus Sensor: Design and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Kenneth A.; Teyssier, Maureen E.; Liddle, J. Alexander
We describe performance modeling and design optimization of a prototype EUV focus sensor (FS) designed for use with existing 0.3-NA EUV projection-lithography tools. At 0.3 NA and 13.5-nm wavelength, the depth of focus shrinks to 150 nm, increasing the importance of high-sensitivity focal-plane detection tools. The FS is a free-standing Ni grating structure that works in concert with a simple mask pattern of regular lines and spaces at constant pitch. The FS pitch matches that of the image-plane aerial-image intensity: it transmits the light with high efficiency when the grating is aligned with the aerial image laterally and longitudinally. Using a single-element photodetector to detect the transmitted flux, the FS is scanned laterally and longitudinally so that the plane of peak aerial-image contrast can be found. The design under consideration has a fixed image-plane pitch of 80 nm, with aperture widths of 12-40 nm (1-3 wavelengths), and aspect ratios of 2-8. TEMPEST-3D is used to model the light transmission. Careful attention is paid to the annular, partially coherent, unpolarized illumination and to the annular pupil of the Micro-Exposure Tool (MET) optics for which the FS is designed. The system design balances the opposing needs of high sensitivity and high throughput, optimizing the signal-to-noise ratio in the measured intensity contrast.
NASA Astrophysics Data System (ADS)
Chen, Su-Chin; Hsiao, Yu-Shen; Chung, Ta-Hsien
2015-04-01
This study is aimed at determining the landslide and driftwood potentials of the Shenmu area in Taiwan by Unmanned Aerial Vehicle (UAV). High-resolution orthomosaics and digital surface models (DSMs) are obtained from several practical UAV surveys using a red-green-blue (RGB) camera and a near-infrared (NIR) camera, respectively. Several artificial aerial survey targets are used for ground control in photogrammetry. The algorithm for this study is based on logistic regression. Eight main factors - elevation, terrain slope, terrain aspect, terrain relief, terrain roughness, distance to roads, distance to rivers, and land utilization - are taken into consideration in our logistic regression model. The related results from the UAV are compared with those from traditional photogrammetry. Overall, the study focuses on monitoring the distribution of areas with high landslide and driftwood potentials in the Shenmu area using fixed-wing UAV-borne RGB and NIR images. We also further analyze the relationship between forests, landslides, disaster potentials and upper river areas.
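As an illustration of the modelling step, a plain gradient-descent logistic regression can be fit to standardized factor values. This is a generic stand-in, not the authors' implementation, and the synthetic data in the usage note are assumptions:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Gradient-descent logistic regression (illustrative sketch).

    X: (n, k) matrix of standardized factor values (e.g., elevation,
    slope, distance to rivers, ...); y: 0/1 landslide occurrence labels.
    """
    X = np.hstack([np.ones((len(X), 1)), X])  # intercept column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probability
        w -= lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def predict_proba(w, X):
    """Landslide potential for each cell given fitted weights."""
    X = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

Mapping `predict_proba` over a raster of the eight factors would yield the potential map the abstract describes.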
Development of a Micro-UAV Hyperspectral Imaging Platform for Assessing Hydrogeological Hazards
NASA Astrophysics Data System (ADS)
Chen, Z.; Alabsi, M.
2015-12-01
Exacerbating global weather changes have cast significant impacts upon the proportion of water supplied to agriculture. Therefore, one of the 21st-century Grand Challenges faced by the global population is securing water for food. However, soil-water behavior in an agricultural environment is complex; among the key properties we recognize is water repellence, or hydrophobicity, which affects many hydrogeological and hazardous conditions such as excessive water infiltration, runoff, and soil erosion. Under a US-Israel research program funded by USDA and BARD in Israel, we have proposed the development of a novel micro-unmanned aerial vehicle (micro-UAV, or drone) based hyperspectral imaging platform for identifying and assessing soil repellence at low altitudes with enhanced flexibility, much reduced cost, and ultimately ease of use. This aerial imaging system consists of a generic micro-UAV, a hyperspectral sensor aided by GPS/IMU, on-board computing units, and a ground station. The target benefits of this system include: (1) programmable waypoint navigation and robotic control for multi-view imaging; (2) the ability of two- or three-dimensional scene reconstruction for complex terrains; and (3) fusion with other sensors to realize real-time diagnosis (e.g., of humidity and solar irradiation that may affect soil-water sensing). In this talk we present our methodology and processes in the integration of hyperspectral imaging, on-board sensing and computing, and hyperspectral data modeling, along with preliminary field demonstration and verification of the developed prototype.
Notable environmental features in some historical aerial photographs from Ashley County, Arkansas
Don C. Bragg; Robert C. Weih Jr.
2007-01-01
A collection of 1939 aerial photographs from Ashley County, Arkansas was analyzed for its environmental information. Taken by the US Department of Defense (USDOD), these images show a number of features now either obscured or completely eliminated over the passage of time. One notable feature is the widespread coverage of "sand blows" in the eastern quarter...
We conducted aerial photographic surveys of Oregon's Yaquina Bay estuary during consecutive summers from 1997 through 2001. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communitie...
Aerial photographic surveys of Oregon's Yaquina Bay estuary were conducted during consecutive summers from 1997 through 2000. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communit...
Rapid mapping of landslide disaster using UAV- photogrammetry
NASA Astrophysics Data System (ADS)
Cahyono, A. B.; Zayd, R. A.
2018-03-01
Unmanned Aerial Vehicle (UAV) systems offer many advantages in mapping applications such as slope mapping and geohazard studies. This study utilizes a UAV system for the landslide disaster that occurred in Jombang Regency, East Java. A rotor-wing UAV was chosen because rotor-wing units are stable and able to capture images easily. Aerial photographs were acquired in strips following the standard aerial acquisition procedure; 60 photos were taken. Secondary data comprising ground control points surveyed with geodetic GPS and check points established by total station were used. The digital camera was calibrated using close-range photogrammetric software, and the recovered camera calibration parameters were then used in the processing of the digital images. All the aerial photographs were processed using digital photogrammetric software, and an orthophoto was produced. The final result is a 1:1500-scale orthophoto map, processed with an SfM algorithm, with a GSD of 3.45 cm. The volume calculated from contour-line delineation is 10527.03 m3, which differs from the terrestrial-method result by 964.67 m3, or 8.4%.
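For context, a reported ground sample distance (GSD) follows from the usual projection relation GSD = pixel size × flying height / focal length. The camera parameters below are illustrative assumptions, not those of the survey in the abstract:

```python
def gsd_cm(sensor_pixel_um, focal_mm, height_m):
    """Ground sample distance in centimetres: one sensor pixel
    projected onto the ground at the given flying height."""
    return (sensor_pixel_um * 1e-6) * height_m / (focal_mm * 1e-3) * 100.0

# Hypothetical example: a 1.55 um pixel behind a 4.7 mm lens at 100 m
# altitude gives roughly a 3.3 cm GSD.
print(round(gsd_cm(1.55, 4.7, 100.0), 2))
```

Flying lower or using a longer focal length shrinks the GSD proportionally, which is how missions are planned to hit a target map scale.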
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe
2017-01-01
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian
2017-07-18
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
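The stacking idea above can be illustrated with a deliberately simplified registration step: instead of FAST feature points and IMU-seeded template matching, the sketch below estimates a single integer translation per frame by phase correlation and averages the aligned frames. Pure translation is an assumption; the paper estimates a full geometric transformation and resamples accordingly:

```python
import numpy as np

def shift_estimate(ref, img):
    """Integer translation (dy, dx) of img w.r.t. ref via phase
    correlation (a simplified stand-in for feature-based matching)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap peaks past the midpoint back to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def stack(images):
    """Register every frame to the first and average them, emulating a
    long exposure from several short-exposure frames."""
    ref = images[0].astype(float)
    acc = ref.copy()
    for img in images[1:]:
        dy, dx = shift_estimate(ref, img.astype(float))
        acc += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(images)
```

Averaging N registered short exposures improves the signal-to-noise ratio roughly as the square root of N without the motion blur of a single long exposure, which is the effect the paper targets.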
Automated Verification of Spatial Resolution in Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald
2011-01-01
Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. 
The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data set, enabling the appropriate use of those images in a number of applications.
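The estimators named above can be sketched for a 1D edge profile: differentiate the edge-spread function (ESF) to obtain the line-spread function (LSF), Fourier-transform the LSF for the MTF, and take the edge response half a pixel either side of the edge for the RER. This is a simplification of the SRVT's edge analysis; no noise handling or sub-pixel edge oversampling is included:

```python
import numpy as np

def mtf_rer(esf):
    """Derive the MTF and relative edge response (RER) from a 1D
    edge-spread function (illustrative sketch)."""
    esf = (esf - esf.min()) / (esf.max() - esf.min())  # normalize 0..1
    lsf = np.gradient(esf)                 # line-spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                          # normalize so MTF(0) = 1
    # RER: edge-response difference half a pixel either side of the edge.
    x = np.arange(len(esf), dtype=float)
    edge = x[np.argmax(lsf)]               # edge location = LSF peak
    rer = np.interp(edge + 0.5, x, esf) - np.interp(edge - 0.5, x, esf)
    return mtf, rer
```

A sharper edge yields a broader MTF and an RER closer to 1; blurred imagery pushes both down, which is what makes them useful spatial-resolution estimators.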
a Fast Approach for Stitching of Aerial Images
NASA Astrophysics Data System (ADS)
Moussa, A.; El-Sheimy, N.
2016-06-01
The last few years have witnessed an increasing volume of aerial image data because of the extensive improvements of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the images acquired during a UAV flight mission is of great help to save the time and cost of the further steps, and a fast automatic stitching of the acquired images helps to visually assess that coverage and overlap. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved: a short flight mission with an image acquisition frequency of one image per second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for all the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the images' initial coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation makes it possible to match only neighboring images and therefore reduces the time-consuming feature-matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process. 
The pre-estimated transformation parameters of the images are employed successively in a growing fashion to create the stitched image and the coverage image. The proposed approach is implemented and tested using the images acquired through a UAV flight mission and the achieved results are presented and discussed.
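The neighborhood-limiting step can be sketched with a plain (unconstrained) Delaunay triangulation of the navigation-derived image positions, a simplification of the paper's incremental constrained triangulation; only image pairs sharing a triangulation edge are passed to feature matching:

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbor_pairs(image_positions):
    """Candidate image pairs for feature matching, restricted to the
    edges of the Delaunay triangulation of the (assumed planimetric)
    image positions from the navigation sensors."""
    tri = Delaunay(np.asarray(image_positions, dtype=float))
    pairs = set()
    for simplex in tri.simplices:  # each triangle contributes 3 edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted(int(v) for v in (simplex[a], simplex[b]))
            pairs.add((i, j))
    return sorted(pairs)
```

Matching only these pairs instead of all n(n-1)/2 combinations reduces the matching workload from quadratic to roughly linear in the number of images, which is where the approach saves most of its time.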
Using Remotely Sensed Data to Automate and Improve Census Bureau Update Activities
NASA Astrophysics Data System (ADS)
Desch, A., IV
2017-12-01
Location of established and new housing structures is fundamental to the Census Bureau's planning and execution of each decennial census. Past Census address list compilation and update programs have involved sending more than 100,000 workers into the field to find and verify housing units. The 2020 Census program has introduced an imagery-based In-Office Address Canvassing Interactive Review (IOAC-IR) program in an attempt to reduce the in-field workload. The human-analyst-driven, aerial-image-based IOAC-IR operation has proven to be a cost-effective and accurate substitute for a large portion of the expensive in-field address canvassing operations. However, the IOAC-IR still required more than a year to complete and over 100 full-time dedicated employees. Much of the basic image analysis work done in IOAC-IR can be handled with established remote sensing and computer vision techniques. The experience gained from the Interactive Review phase of In-Office Address Canvassing has led to the development of a prototype geo-processing tool to automate much of this process for future and ongoing Address Canvassing operations. This prototype utilizes high-resolution aerial imagery and LiDAR to identify structures and compare their locations to existing Census geographic information. In this presentation, we report on the comparison of this exploratory system's results to the human-based IOAC-IR. The experimental image- and LiDAR-based change detection approach has itself led to very promising follow-on experiments utilizing very current, high-repeat datasets and scalable cloud computing. We will discuss how these new techniques can be used both to help the US Census Bureau meet its goal of identifying all the housing units in the US and to help developing countries better identify where their populations are currently distributed.
Exploring the Potential of Aerial Photogrammetry for 3d Modelling of High-Alpine Environments
NASA Astrophysics Data System (ADS)
Legat, K.; Moe, K.; Poli, D.; Bollmann, E.
2016-03-01
High-alpine areas are subject to rapid topographic changes, mainly caused by natural processes like glacial retreat and other geomorphological processes, and also due to anthropogenic interventions like construction of slopes and infrastructure in skiing resorts. Consequently, the demand for highly accurate digital terrain models (DTMs) in alpine environments has arisen. Public administrations often have dedicated resources for the regular monitoring of glaciers and natural hazard processes. In case of glaciers, traditional monitoring encompasses in-situ measurements of area and length and the estimation of volume and mass changes. Next to field measurements, data for such monitoring programs can be derived from DTMs and digital ortho photos (DOPs). Skiing resorts, on the other hand, require DTMs as input for planning and - more recently - for RTK-GNSS supported ski-slope grooming. Although different in scope, the demand of both user groups is similar: high-quality and up-to-date terrain data for extended areas often characterised by difficult accessibility and large elevation ranges. Over the last two decades, airborne laser scanning (ALS) has replaced photogrammetric approaches as state-of-the-art technology for the acquisition of high-resolution DTMs also in alpine environments. Reasons include the higher productivity compared to (manual) stereo-photogrammetric measurements, canopy-penetration capability, and limitations of photo measurements on sparsely textured surfaces like snow or ice. Nevertheless, the last few years have shown strong technological advances in the field of aerial camera technology, image processing and photogrammetric software which led to new possibilities for image-based DTM generation even in alpine terrain. 
At Vermessung AVT, an Austrian-based surveying company, and its subsidiary Terra Messflug, very promising results have been achieved for various projects in high-alpine environments, using images acquired by large-format digital cameras of Microsoft's UltraCam series and the in-house processing chain centred on the Dense-Image-Matching (DIM) software SURE by nFrames. This paper reports the work carried out at AVT for the surface- and terrain modelling of several high-alpine areas using DIM- and ALS-based approaches. A special focus is dedicated to the influence of terrain morphology, flight planning, GNSS/IMU measurements, and ground-control distribution in the georeferencing process on the data quality. Based on the very promising results, some general recommendations for aerial photogrammetry processing in high-alpine areas are made to achieve best possible accuracy of the final 3D-, 2.5D- and 2D products.
Positional accuracy and geographic bias of four methods of geocoding in epidemiologic research.
Schootman, Mario; Sterling, David A; Struthers, James; Yan, Yan; Laboube, Ted; Emo, Brett; Higgs, Gary
2007-06-01
We examined the geographic bias of four methods of geocoding addresses using ArcGIS, commercial firm, SAS/GIS, and aerial photography. We compared "point-in-polygon" (ArcGIS, commercial firm, and aerial photography) and the "look-up table" method (SAS/GIS) to allocate addresses to census geography, particularly as it relates to census-based poverty rates. We randomly selected 299 addresses of children treated for asthma at an urban emergency department (1999-2001). The coordinates of the building address side door were obtained by constant offset based on ArcGIS and a commercial firm and true ground location based on aerial photography. Coordinates were available for 261 addresses across all methods. For 24% to 30% of geocoded road/door coordinates the positional error was 51 meters or greater, which was similar across geocoding methods. The mean bearing was -26.8 degrees for the vector of coordinates based on aerial photography and ArcGIS and 8.5 degrees for the vector based on aerial photography and the commercial firm (p < 0.0001). ArcGIS and the commercial firm performed very well relative to SAS/GIS in terms of allocation to census geography. For 20%, the door location based on aerial photography was assigned to a different block group compared to SAS/GIS. The block group poverty rate varied at least two standard deviations for 6% to 7% of addresses. We found important differences in distance and bearing between geocoding relative to aerial photography. Allocation of locations based on aerial photography to census-based geographic areas could lead to substantial errors.
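The distance and bearing comparisons above rest on standard great-circle formulas. A sketch, assuming a spherical Earth with the conventional mean radius (the study's exact geodetic computation may differ):

```python
import math

def distance_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees in
    -180..180, 0 = north) from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius (assumed)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Haversine distance
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    d = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing from point 1 toward point 2
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return d, math.degrees(math.atan2(y, x))
```

Applied to a geocoded coordinate and its aerial-photography reference, the first value is the positional error and the second the bearing of the error vector, the two quantities the study compares across geocoding methods.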
NASA Astrophysics Data System (ADS)
Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian
2018-04-01
The aftermath of wartime attacks is often felt long after the war has ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images taken during the Second World War. The object model for the bomb craters is an ellipse. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing them from the current configuration, changing their positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favored, and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with the predefined model. To generate the impact map, a probability map is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map in a heterogeneous image stock.
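The impact-map step described above (kernel density estimation over detections, then thresholding) can be sketched in a few lines. This is only an illustration of the principle: the crater coordinates, bandwidth, and threshold below are invented, not values from the paper.

```python
# Minimal sketch: turn detected crater centres into a probability surface
# via Gaussian kernel density estimation, then threshold it into a binary
# contaminated/uncontaminated impact map. All parameters are illustrative.
import numpy as np

def density_map(craters, grid_x, grid_y, bandwidth):
    """Gaussian kernel density estimate evaluated on a regular grid."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx, dtype=float)
    for cx, cy in craters:
        d2 = (xx - cx) ** 2 + (yy - cy) ** 2
        density += np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Normalise so the kernels sum like a proper density estimate.
    density /= (2.0 * np.pi * bandwidth ** 2 * len(craters))
    return density

craters = [(20.0, 20.0), (22.0, 19.0), (80.0, 75.0)]  # detected centres (m)
gx = np.arange(0.0, 100.0, 1.0)
gy = np.arange(0.0, 100.0, 1.0)
dens = density_map(craters, gx, gy, bandwidth=5.0)
contaminated = dens > 0.2 * dens.max()   # binary impact map
print(contaminated.sum(), "of", contaminated.size, "cells flagged")
```

In practice the bandwidth would reflect the expected bomb scatter and the threshold the acceptable residual risk.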
Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.
2017-08-01
Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-sensor geometry of the multi-lens structure induces significant band misregistration effects in the original images, band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration, and to utilize the calibrated results for image transformation and lens distortion correction. Since the calibration uncertainty leads to different amounts of systematic error, the last step is to optimize the results in order to acquire better co-registration accuracy. Because parallax can cause significant band misregistration effects when images are taken closer to the targets, four datasets acquired with the RedEdge and Sequoia, including aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for aerial images show that RABBIT can achieve sub-pixel accuracy, a level suitable for the band co-registration of any multi-lens MSC. The results for close-range images show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.
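In its simplest form, band misregistration is a translation between bands. The sketch below is not the RABBIT method itself, only an illustration of the underlying idea: estimate the shift of one band relative to another (here via phase correlation) and warp it back. The synthetic "bands" are random textures displaced by a known offset.

```python
# Illustrative band co-registration by phase correlation (not RABBIT).
import numpy as np

def phase_correlation_shift(band_a, band_b):
    """Estimate the integer (dy, dx) translation of band_b relative to band_a."""
    fa = np.fft.fft2(band_a)
    fb = np.fft.fft2(band_b)
    cross_power = np.conj(fa) * fb
    cross_power /= np.abs(cross_power) + 1e-12
    # The inverse transform peaks at the displacement of band_b w.r.t. band_a.
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the wrapped half of the spectrum to negative shifts.
    if dy > band_a.shape[0] // 2:
        dy -= band_a.shape[0]
    if dx > band_a.shape[1] // 2:
        dx -= band_a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
red = rng.random((128, 128))
nir = np.roll(red, shift=(3, -5), axis=(0, 1))   # simulated misregistration
dy, dx = phase_correlation_shift(red, nir)
registered = np.roll(nir, shift=(-dy, -dx), axis=(0, 1))
print("estimated shift:", dy, dx)
```

Real multi-lens rigs additionally require the perspective and lens-distortion corrections the abstract describes; a pure translation model only holds for distant, planar scenes.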
Employing unmanned aerial vehicle to monitor the health condition of wind turbines
NASA Astrophysics Data System (ADS)
Huang, Yishuo; Chiang, Chih-Hung; Hsu, Keng-Tsang; Cheng, Chia-Chi
2018-04-01
Unmanned aerial vehicles (UAVs) can gather spatial information on huge structures, such as wind turbines, that can be difficult to obtain with traditional approaches. In this paper, the UAV used in the experiments is equipped with a high-resolution camera and a thermal infrared camera. The high-resolution camera provides a series of images with resolution up to 10 megapixels; those images can be used to form a 3D model using digital photogrammetry techniques. By comparing 3D scenes of the same wind turbine at different times, possible displacement of the supporting tower of the wind turbine, caused by ground movement or foundation deterioration, may be determined. The recorded thermal images are analyzed by applying image segmentation methods to the surface temperature distribution, separating a series of sub-regions by differences in surface temperature. The high-resolution optical image and the segmented thermal image are fused so that surface anomalies of the wind turbine are more easily identified.
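A hedged sketch of the thermal-image analysis described above: the surface temperature map is partitioned into sub-regions by temperature differences. Here the partition is a simple binning into temperature intervals; the paper's segmentation method may be more sophisticated, and the temperature field below is synthetic.

```python
# Toy thermal segmentation: label pixels by the temperature interval they
# fall in, then flag the hottest band as candidate surface anomalies.
import numpy as np

def segment_by_temperature(temp, edges):
    """Label each pixel with the index of its temperature interval."""
    return np.digitize(temp, edges)

# Synthetic tower surface: 20 degC background with a 35 degC hot patch.
temp = np.full((100, 60), 20.0)
temp[40:50, 20:30] = 35.0
labels = segment_by_temperature(temp, edges=[25.0, 30.0])
anomaly_mask = labels == 2           # pixels above the warmest edge
print("anomalous pixels:", int(anomaly_mask.sum()))
```

The resulting mask is what would then be fused with the optical image to localize the anomaly on the structure.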
Open Skies aerial photography of selected areas in Central America affected by Hurricane Mitch
Molnia, Bruce; Hallam, Cheryl A.
1999-01-01
Between October 27 and November 1, 1998, Central America was devastated by Hurricane Mitch. Following a humanitarian relief effort, one of the first informational needs was complete aerial photographic coverage of the storm ravaged areas so that the governments of the affected countries, the U.S. agencies planning to provide assistance, and the international relief community could come to the aid of the residents of the devastated area. Between December 4 and 19, 1998 an Open Skies aircraft conducted five successful missions and obtained more than 5,000 high-resolution aerial photographs and more than 15,000 video images. The aerial data are being used by the Reconstruction Task Force and many others who are working to begin rebuilding and to help reduce the risk of future destruction.
Pict'Earth: A new Method of Virtual Globe Data Acquisition
NASA Astrophysics Data System (ADS)
Johnson, J.; Long, S.; Riallant, D.; Hronusov, V.
2007-12-01
Georeferenced aerial imagery facilitates and enhances Earth science investigations. The realized value of imagery as a tool is measured by the spatial, temporal and radiometric resolution of the imagery. Currently, there is a need for a system that facilitates the rapid acquisition and distribution of high-resolution aerial Earth images of localized areas. The Pict'Earth group has developed an apparatus and software algorithms that facilitate such tasks. Hardware includes a small radio-controlled model airplane (RC UAV); light smartphones with high-resolution cameras (Nokia NSeries devices); and a GPS connected to the smartphone via Bluetooth, or a GPS-equipped phone. Software includes Python code that controls the functions of the smartphone and GPS to acquire data in-flight; online Virtual Globe applications including Google Earth; AJAX/Web 2.0 technologies and services; and APIs and libraries for developers, all of which are based on open XML-based GIS data standards. This new process for the acquisition and distribution of high-resolution aerial Earth images includes the following stages. A survey is performed over the area of interest (AOI) with the RC UAV (mobile live processing): in real time, the software collects images from the smartphone camera and positional data (latitude, longitude, altitude and heading) from the GPS. The software then calculates the Earth footprint (geoprint) of each image and creates KML files that incorporate the georeferenced images and the tracks of the UAV. Optionally, it is possible to send the data in-flight via SMS/MMS (text and multimedia messages), or over cellular internet networks via FTP. In post-processing, the images are filtered, transformed, and assembled into an orthorectified image mosaic. The final mosaic is then cut into tiles and uploaded as a user-ready product to web servers in KML format for use in Virtual Globes and other GIS applications.
The obtained images and resultant data have high spatial resolution, can be updated in near-real time (high temporal resolution), and provide current radiance values (which is important for seasonal work). The final mosaics can also be assembled into time-lapse sequences and presented temporally. The suggested solution is cost effective when compared to the alternative methods of acquiring similar imagery. The systems are compact, mobile, and do not require a substantial amount of auxiliary equipment. Ongoing development of the software makes it possible to adapt the technology to different platforms, smartphones, sensors, and types of data. The range of application of this technology potentially covers a large part of the spectrum of Earth sciences including the calibration and validation of high-resolution satellite-derived products. These systems are currently being used for monitoring of dynamic land and water surface processes, and can be used for reconnaissance when locating and establishing field measurement sites.
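The "geoprint" step above computes the ground footprint of each photo from the GPS fix. A minimal flat-earth sketch for a nadir-pointing camera is shown below; the field-of-view angles and the metres-per-degree conversion are illustrative assumptions, not Pict'Earth's actual code.

```python
# Hypothetical geoprint: ground footprint corners of a nadir photograph,
# from camera position, altitude, heading, and assumed field-of-view angles.
import math

def geoprint(lat, lon, alt_m, heading_deg, hfov_deg, vfov_deg):
    """Return the four footprint corners (lat, lon) of a nadir photograph."""
    half_w = alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    half_h = alt_m * math.tan(math.radians(vfov_deg) / 2.0)
    th = math.radians(heading_deg)
    corners = []
    for ex, ey in [(-half_w, -half_h), (half_w, -half_h),
                   (half_w, half_h), (-half_w, half_h)]:
        # Rotate camera-frame offsets by the aircraft heading.
        east = ex * math.cos(th) + ey * math.sin(th)
        north = -ex * math.sin(th) + ey * math.cos(th)
        # Approximate metres-per-degree near the given latitude.
        dlat = north / 111_320.0
        dlon = east / (111_320.0 * math.cos(math.radians(lat)))
        corners.append((lat + dlat, lon + dlon))
    return corners

corners = geoprint(lat=48.0, lon=11.0, alt_m=150.0,
                   heading_deg=0.0, hfov_deg=60.0, vfov_deg=45.0)
for c in corners:
    print(c)
```

The four corners can then be written directly into a KML ground overlay for display in a virtual globe.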
NASA Astrophysics Data System (ADS)
Svejkovsky, Jan; Nezlin, Nikolay P.; Mustain, Neomi M.; Kum, Jamie B.
2010-04-01
Spatial-temporal characteristics and environmental factors regulating the behavior of stormwater runoff from the Tijuana River in southern California were analyzed utilizing very high resolution aerial imagery and time-coincident environmental and bacterial sampling data. Thirty-nine multispectral aerial images with 2.1-m spatial resolution were collected after major rainstorms during 2003-2008. Utilizing differences in color reflectance characteristics, the ocean surface was classified into non-plume waters and three components of the runoff plume reflecting differences in age and suspended sediment concentrations. Tijuana River discharge rate was the primary factor regulating the size of the freshest plume component and its longshore extensions to the north and south. Wave direction was found to affect the longshore distribution of the shoreline-connected fresh plume components much more strongly than wind direction. Wave-driven sediment resuspension also significantly contributed to the size of the oldest plume component. Surf zone bacterial samples collected near the time of each image acquisition were used to evaluate the contamination characteristics of each plume component. The bacterial contamination of the freshest plume waters was very high (100% of surf zone samples exceeded California standards), but the oldest plume areas were heterogeneous, including both polluted and clean waters. The aerial imagery archive allowed study of river runoff characteristics on a plume component level, not previously possible with coarser satellite images. Our findings suggest that high-resolution imaging can quickly identify the spatial extents of the most polluted runoff but cannot be relied upon to always identify the entire polluted area. Our results also indicate that wave-driven transport is important in distributing the most contaminated plume areas along the shoreline.
Remote sensing based water-use efficiency evaluation in sub-surface irrigated wine grape vines
NASA Astrophysics Data System (ADS)
Zúñiga, Carlos Espinoza; Khot, Lav R.; Jacoby, Pete; Sankaran, Sindhuja
2016-05-01
Increased water demands have forced the agriculture industry to investigate better irrigation management strategies in crop production. Efficient irrigation systems, improved irrigation scheduling, and selection of crop varieties with better water-use efficiency can all help conserve water. In an ongoing experiment carried out in the Red Mountain American Viticultural Area near Benton City, Washington, subsurface drip irrigation treatments at 30, 60 and 90 cm depth, delivered at 15, 30 and 60% of evapotranspiration demand, were applied using pulse and continuous irrigation. These treatments were compared to continuous surface irrigation applied at 100% of evapotranspiration demand. Thermal infrared and multispectral images were acquired with an unmanned aerial vehicle during the growing season. The results indicated no difference in yield among treatments (p < 0.05); however, there was a statistical difference in leaf temperature between surface and subsurface irrigation (p < 0.05). The normalized difference vegetation index obtained from the analysis of the multispectral images showed a statistical difference among treatments when surface and subsurface irrigation methods were compared, and similar differences in vegetation index values were observed when irrigation rates were compared. These results show the applicability of aerial thermal infrared and multispectral imagery for characterizing plant responses to different irrigation treatments, and the use of such information in irrigation scheduling or in high-throughput selection of water-use-efficient crop varieties in plant breeding.
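The vegetation-index analysis mentioned above can be sketched as follows: NDVI is computed per pixel from the red and near-infrared bands of the multispectral image. The reflectance values below are made-up illustrative numbers, not data from the experiment.

```python
# Per-pixel NDVI = (NIR - Red) / (NIR + Red), guarded against zero division.
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index for two same-shaped bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Two hypothetical vine-canopy pixels: well-watered (high NIR) vs. stressed.
nir = np.array([[0.60, 0.40]])
red = np.array([[0.08, 0.20]])
vi = ndvi(nir, red)
print(vi)
```

Comparing mean canopy NDVI per plot is the kind of per-treatment statistic the study reports.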
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at an 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system of the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
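The 432-million-pixel figure can be roughly checked by assuming the standard 230 mm × 230 mm cartographic film frame (an assumption here; the abstract states only the spot size) scanned at 11 µm:

```python
# Quick arithmetic check of the pixel-count claim: a 230 mm x 230 mm film
# frame (assumed standard format) scanned at an 11-micrometre spot size.
frame_mm = 230.0
spot_um = 11.0
pixels_per_side = frame_mm * 1000.0 / spot_um
total_pixels = pixels_per_side ** 2
print(f"{pixels_per_side:.0f} px per side, {total_pixels / 1e6:.0f} Mpx total")
```

This gives roughly 437 Mpx, in good agreement with the quoted 432 million pixels.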
The future of structural fieldwork - UAV assisted aerial photogrammetry
NASA Astrophysics Data System (ADS)
Vollgger, Stefan; Cruden, Alexander
2015-04-01
Unmanned aerial vehicles (UAVs), commonly referred to as drones, are opening new and low cost possibilities to acquire high-resolution aerial images and digital surface models (DSM) for applications in structural geology. UAVs can be programmed to fly autonomously along a user defined grid to systematically capture high-resolution photographs, even in difficult to access areas. The photographs are subsequently processed using software that employ SIFT (scale invariant feature transform) and SFM (structure from motion) algorithms. These photogrammetric routines allow the extraction of spatial information (3D point clouds, digital elevation models, 3D meshes, orthophotos) from 2D images. Depending on flight altitude and camera setup, sub-centimeter spatial resolutions can be achieved. By "digitally mapping" georeferenced 3D models and images, orientation data can be extracted directly and used to analyse the structural framework of the mapped object or area. We present UAV assisted aerial mapping results from a coastal platform near Cape Liptrap (Victoria, Australia), where deformed metasediments of the Palaeozoic Lachlan Fold Belt are exposed. We also show how orientation and spatial information of brittle and ductile structures extracted from the photogrammetric model can be linked to the progressive development of folds and faults in the region. Even though there are both technical and legislative limitations, which might prohibit the use of UAVs without prior commercial licensing and training, the benefits that arise from the resulting high-resolution, photorealistic models can substantially contribute to the collection of new data and insights for applications in structural geology.
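Once a georeferenced 3D model exists, orientation data can be extracted by fitting planes to digitised surfaces. A minimal least-squares sketch: fit a plane to 3D points with SVD and convert its normal to dip and dip direction. The point set below is synthetic, not taken from the Cape Liptrap model.

```python
# Plane fitting for structural measurements: best-fit plane via SVD, then
# dip / dip-direction from the plane normal (x = east, y = north, z = up).
import numpy as np

def plane_orientation(points):
    """Return (dip, dip_direction) in degrees for the best-fit plane."""
    centred = points - points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest
    # singular value of the centred point cloud.
    _, _, vt = np.linalg.svd(centred)
    n = vt[-1]
    if n[2] < 0:                     # force an upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_direction

# Synthetic bedding surface dipping 30 degrees towards the east (090).
rng = np.random.default_rng(1)
xy = rng.random((200, 2)) * 10.0
z = -np.tan(np.radians(30.0)) * xy[:, 0]
pts = np.column_stack([xy, z])
dip, dip_dir = plane_orientation(pts)
print(f"dip {dip:.1f} towards {dip_dir:.0f}")
```

In practice the points would be digitised on a bedding or fault surface in the photogrammetric model, and noise in the fit propagates into the orientation estimate.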
Monitoring Seabirds and Marine Mammals by Georeferenced Aerial Photography
NASA Astrophysics Data System (ADS)
Kemper, G.; Weidauer, A.; Coppack, T.
2016-06-01
The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction, in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, low flight altitudes, necessary for the visual classification of species, disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines) have become a mandatory requirement, technically solving the problem of distant-related observation bias. A purpose-assembled imagery system including medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at a comparatively low cost. At a flight altitude of 425 m, a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high quality 16 bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferrable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. 
The imagery can be routinely screened by the human eye, guided by purpose-programmed software, to distinguish biological from non-biological signals. Each detected seabird or marine mammal signal is identified to species level or assigned to a species group and automatically saved into a geo-database for subsequent quality assurance, geo-statistical analyses and data export to third-party users. The relative size of a detected object can be accurately measured, which provides key information for species identification. During the development and testing of this system until 2015, more than 40 surveys produced around 500,000 digital aerial images, some of which were taken in special protection areas (SPAs) of the Baltic Sea and thus include a wide range of relevant species. Here, we present the technical principles of this comparatively new survey approach and discuss the key methodological challenges related to optimizing survey design and workflow in view of the pending regulatory requirements for effective environmental impact assessments.
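The imaging geometry quoted above can be checked with the standard ground-sampling-distance relation GSD = H·p/f (flight height H, sensor pixel pitch p, focal length f). Solving for the pixel pitch implied by a 2 cm GSD at 425 m with a 110 mm lens:

```python
# Sanity check of the survey geometry using GSD = H * p / f.
H = 425.0            # flight altitude, m
f = 0.110            # focal length, m
gsd = 0.02           # ground sampling distance, m
pixel_pitch = gsd * f / H                 # metres per pixel on the sensor
footprint_across = 155.0                  # m, from the abstract
pixels_across = footprint_across / gsd    # image extent implied by footprint
print(f"pixel pitch {pixel_pitch * 1e6:.1f} um, "
      f"{pixels_across:.0f} px across track")
```

The implied ~5.2 µm pitch and ~7750-pixel across-track extent are plausible for a medium-format sensor, consistent with the system described.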
Drogue tracking using 3D flash lidar for autonomous aerial refueling
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Stettner, Roger
2011-06-01
Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
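A simplified stand-in for the RANSAC step described above: estimate the drogue centre from noisy rim points by repeatedly fitting a circle to three random samples and keeping the fit with the largest consensus set. The data and tolerances are synthetic, and the sketch works in 2D whereas the paper operates on 3D point clouds.

```python
# Toy RANSAC circle fit for robust centre estimation under outliers.
import numpy as np

def circle_from_3(p1, p2, p3):
    """Circumcentre and radius of the circle through three 2D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:               # (nearly) collinear sample
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy]), np.hypot(ax - ux, ay - uy)

def ransac_circle(points, tol=0.05, iters=200, seed=0):
    """Best circle (centre, radius) by maximum consensus over random triples."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        fit = circle_from_3(*points[rng.choice(len(points), 3, replace=False)])
        if fit is None:
            continue
        centre, r = fit
        dist = np.abs(np.hypot(*(points - centre).T) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (centre, r), inliers
    return best

# Rim of a drogue of radius 0.3 m centred at (1.0, 2.0), plus outliers.
rng = np.random.default_rng(42)
angles = rng.uniform(0, 2 * np.pi, 80)
rim = np.column_stack([1.0 + 0.3 * np.cos(angles), 2.0 + 0.3 * np.sin(angles)])
outliers = rng.uniform(0, 4, size=(20, 2))
centre, radius = ransac_circle(np.vstack([rim, outliers]))
print("centre", centre.round(3), "radius", round(radius, 3))
```

The consensus step is what makes the estimate robust to the spurious returns that fog, clouds, and sensor noise introduce.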
NASA Astrophysics Data System (ADS)
Hidayat, Husnul; Cahyono, A. B.
2016-11-01
Singosari temple is one of the cultural heritage buildings in East Java, Indonesia; it was built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM 49S coordinate system. The result shows that all facades, the floor, and the upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx = 0.041 m, RMSEy = 0.031 m, and RMSEz = 0.049 m, which represent a 0.071 m displacement in 3D space. In addition, the mean difference of length measurements of the object is 0.057 m. With this accuracy, this method can be used to map the site up to 1:237 scale. Although the accuracy level is still in centimeters, combining aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.
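The 0.071 m figure above is simply the per-axis RMSEs combined in quadrature, which a two-line check confirms:

```python
# Combine per-axis RMSEs into the 3D displacement RMSE (quadrature sum).
import math

rmse_x, rmse_y, rmse_z = 0.041, 0.031, 0.049
rmse_3d = math.sqrt(rmse_x**2 + rmse_y**2 + rmse_z**2)
print(f"RMSE3D = {rmse_3d:.3f} m")
```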
NASA Astrophysics Data System (ADS)
Guyassa, Etefa; Frankl, Amaury; Zenebe, Amanuel; Lanckriet, Sil; Demissie, Biadgilgn; Zenebe, Gebreyohanis; Poesen, Jean; Nyssen, Jan
2016-04-01
In the Highlands of Northern Ethiopia, land degradation is claimed to have occurred over a long time, mainly due to agricultural practices and a lack of land management. However, quantitative information on long-term changes in land use, cover and management is rare. Knowledge of such historical changes is essential for present and future land management for sustainable development, especially in an agriculture-based economy. Hence, this study aimed to investigate the changes in land use, cover and management around Hagere Selam, Northern Ethiopia, over the last 80 years (1935-2014). We recovered a flight of ten aerial photographs at an approximate scale of 1:11,500, realized by the Italian Military Geographical Institute in 1935, along a mountain ridge between 13.6490°N, 39.1848°E and 13.6785°N, 39.2658°E. Jointly with Google Earth images (2014), the historical aerial photographs were used to compare changes over this long period. The point-count technique was applied by overlaying a grid of 18 x 15 points (small squares) on the 20 cm x 15 cm aerial photographs and on Google Earth images representing the same area. Occurrences of the major land cover types (cropland, forest, grassland, shrubland, bare land, built-up areas and water bodies) were counted to compute their proportions in 1935 and 2014. In 1935, cropland, shrubland and built-up areas were predominant, while the other land cover types were not observed. On the Google Earth images, all categories were observed except forest. The results show that at both dates cropland was the dominant land cover, followed by shrubland. The proportion of cropland at present (70.5%) is approximately the same as in the 1930s (72%), but shrubland has decreased while bare land, grassland and built-up areas have increased. Hence, the large share of cropland was maintained over the past long period without allowing woody vegetation to expand its area, while some cropland was abandoned and converted to grassland and bare land.
The increased proportion of built-up areas also explains the shrinking of shrubland. On the studied flight of aerial photographs, no forests existed in 1935, and none have been restored to the present. The increased area of open water, on the other hand, is related to the ongoing land rehabilitation activities carried out in the region. These results confirm previous studies showing that severe land degradation has occurred in the Highlands of Northern Ethiopia over a long time, due to early (pre-1935) cropland expansion and deforestation.
Toward Automatic Georeferencing of Archival Aerial Photogrammetric Surveys
NASA Astrophysics Data System (ADS)
Giordano, S.; Le Bris, A.; Mallet, C.
2018-05-01
Images from archival aerial photogrammetric surveys are a unique and relatively unexplored means to chronicle 3D land-cover changes over the past 100 years. They provide a relatively dense temporal sampling of the territories with very high spatial resolution. Such time series image analysis is a mandatory baseline for a large variety of long-term environmental monitoring studies. The current bottleneck for accurate comparison between epochs is the fine georeferencing step. No fully automatic method has been proposed yet, and existing studies are rather limited in terms of area and number of dates. The state of the art shows that the major challenge is the identification of ground references: cartographic coordinates and their positions in the archival images. This task is performed manually and is extremely time-consuming. This paper proposes a photogrammetric approach and states that the 3D information that can be computed is the key to full automation. Its original idea lies in a 2-step approach: (i) the computation of a coarse absolute image orientation; (ii) the use of the coarse Digital Surface Model (DSM) information for automatic absolute image orientation. It relies only on a recent orthoimage + DSM, used as the master reference for all epochs. The coarse orthoimage, compared with such a reference, allows the identification of dense ground references, and the coarse DSM provides their positions in the archival images. Results on two areas and 5 dates show that this method is compatible with long and dense archival aerial image series. Satisfactory planimetric and altimetric accuracies are reported, with variations depending on the ground sampling distance of the images and the location of the Ground Control Points.
Real-Time Feature Tracking Using Homography
NASA Technical Reports Server (NTRS)
Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.
2010-01-01
This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images; at high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is the set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images, which makes it possible to track the same set of points through multiple frames.
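The homography-based clean-up step can be sketched as follows: given a 3×3 homography H, a point on the dominant plane in image 1 is mapped into image 2, and any feature pair whose match disagrees with that prediction is discarded. H and the feature pairs below are synthetic, not output of the described software.

```python
# Predict image-2 positions via a homography and reject inconsistent pairs.
import numpy as np

def predict(H, pts):
    """Apply a 3x3 homography to Nx2 points (homogeneous divide included)."""
    ph = np.column_stack([pts, np.ones(len(pts))])
    mapped = ph @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def filter_pairs(H, pts1, pts2, tol=2.0):
    """Keep pairs whose image-2 feature lies within tol pixels of prediction."""
    err = np.linalg.norm(predict(H, pts1) - pts2, axis=1)
    return err < tol

# A pure-translation homography (shift 10 px right, 5 px down) as a toy H.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])
pts1 = np.array([[0.0, 0.0], [100.0, 50.0], [30.0, 70.0]])
pts2 = np.array([[10.0, 5.0], [110.0, 55.0], [90.0, 12.0]])  # last: mismatch
keep = filter_pairs(H, pts1, pts2)
print(keep)
```

In the real pipeline H itself is estimated by RANSAC over the candidate pairs, so prediction and outlier rejection happen jointly.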
Employing UAVs to Acquire Detailed Vegetation and Bare Ground Data for Assessing Rangeland Health
NASA Astrophysics Data System (ADS)
Rango, A.; Laliberte, A.; Herrick, J. E.; Winters, C.
2007-12-01
Because of its value as a historical record (extending back to the mid-1930s), aerial photography is an important tool used in many rangeland studies. However, these historical photos are not very useful for detailed analysis of rangeland health because of inadequate spatial resolution and scheduling limitations. These issues are now being resolved by flying Unmanned Aerial Vehicles (UAVs) over rangeland study areas. Spatial resolution has improved rapidly in the last 10 years, from the QuickBird satellite through improved aerial photography to the new UAV coverage, thanks to improved sensors and the simpler approach of low-altitude flights. Our rangeland health experiments have shown that low-altitude UAV digital photography is preferred by rangeland scientists because it allows them, for the first time, to identify vegetation and land surface patterns and patches, gap sizes, bare soil percentages, and vegetation type. This hyperspatial imagery (imagery with a resolution finer than the object of interest) is obtained at about 5 cm resolution by flying at an altitude of 150 m above the surface of the Jornada Experimental Range in southern New Mexico. Additionally, the UAV provides improved temporal flexibility, such as flights immediately following fires, floods, and other catastrophic disturbances, because the flight capability is located near the study area and the vehicles are under the direct control of the users, eliminating the additional steps associated with budgets and contracts.
There are significant challenges to improve the data to make them useful for operational agencies, namely, image distortion with inexpensive, consumer grade digital cameras, difficulty in detecting sufficient ground control points in small scenes (152m by 114m), accuracy of exterior UAV information on X,Y, Z, roll, pitch, and heading, the sheer number of images collected, and developing reliable relationships with ground-based data across a broad range of topographies and plant communities. Our efforts are currently focused on developing a complete and efficient workflow for UAV operational missions consisting of flight planning, image acquisition, image rectification and mosaicking, and image classification. The remote sensing capability is being incorporated into existing rangeland health assessment and monitoring protocols.
Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform
NASA Astrophysics Data System (ADS)
Liu, H. S.; Liao, H. M.
2015-08-01
A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can make measurements directly on the images. To calculate positions properly, all sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate images, coordinates, and camera positions; however, such systems are very expensive, and users cannot use the results immediately because the position information is not embedded in the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open-source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, positions can be calculated with the open-source software OpenCV. Finally, we use the open-source panorama browser Panini and integrate all of this into the open-source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.
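The synchronization requirement above can be illustrated with a toy example: given a GPS track logged by the microcontroller and the shutter timestamps, the camera position for each exposure is interpolated from the track. All values below are invented for illustration.

```python
# Interpolate camera positions at shutter times from a logged GPS track.
import numpy as np

gps_time = np.array([0.0, 1.0, 2.0, 3.0])          # seconds since start
gps_lat = np.array([24.00, 24.01, 24.02, 24.03])
gps_lon = np.array([121.00, 121.00, 121.01, 121.01])

shutter_times = np.array([0.5, 2.5])               # exposure timestamps
img_lat = np.interp(shutter_times, gps_time, gps_lat)
img_lon = np.interp(shutter_times, gps_time, gps_lon)
for t, la, lo in zip(shutter_times, img_lat, img_lon):
    print(f"t={t:.1f}s -> ({la:.4f}, {lo:.4f})")
```

Embedding these interpolated coordinates into each image's metadata is what lets the result be used immediately, without a separate POS post-processing step.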
Erik Haunreiter; Zhanfeng Liu; Jeff Mai; Zachary Heath; Lisa Fischer
2008-01-01
Effective monitoring and identification of areas of hardwood mortality is a critical component in the management of sudden oak death (SOD). From 2001 to 2005, aerial surveys covering 13.5 million acres in California were conducted to map and monitor hardwood mortality for the early detection of Phytophthora ramorum, the pathogen responsible for SOD....
NASA Astrophysics Data System (ADS)
Li, Wenzhuo; Sun, Kaimin; Li, Deren; Bai, Ting
2016-07-01
Unmanned aerial vehicle (UAV) remote sensing technology has come into wide use in recent years. The poor stability of the UAV platform, however, produces more inconsistencies in hue and illumination among UAV images than among images from more stable platforms. Image dodging is a process used to reduce these inconsistencies caused by different imaging conditions. We propose an algorithm for automatic image dodging of UAV images using two-dimensional radiometric spatial attributes. We use object-level image smoothing to smooth foreground objects in images and acquire an overall reference background image by relative radiometric correction. We apply the Contourlet transform to separate high- and low-frequency sections for every single image, and replace the low-frequency section with the low-frequency section extracted from the corresponding region in the overall reference background image. We apply the inverse Contourlet transform to reconstruct the final dodged images. In this process, a single image must be split into blocks of reasonable size, with overlaps, because of the large image size. Experimental mosaic results show that our proposed method reduces the uneven distribution of hue and illumination. Moreover, it effectively eliminates dark-bright interstrip effects caused by shadows and vignetting in UAV images while maximally protecting image texture information.
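The low-frequency replacement at the heart of the dodging step can be sketched with a simple low-pass split; the box filter below is a crude stand-in for the paper's Contourlet transform, and the image sizes and filter width are illustrative assumptions:

```python
import numpy as np

def box_blur(a, k=5):
    # separable box filter: a crude low-pass stand-in for the Contourlet split
    kernel = np.ones(k) / k
    a = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, a)

def dodge_to_reference(image, reference, k=5):
    # keep the image's high-frequency detail, take the low frequencies
    # (overall hue/illumination) from the reference background image
    high = image - box_blur(image, k)
    return high + box_blur(reference, k)

rng = np.random.default_rng(0)
image = rng.random((32, 32)) * 0.1 + np.linspace(0.0, 0.8, 32)  # detail + uneven tone
reference = np.full((32, 32), 0.4)                              # flat reference tone
out = dodge_to_reference(image, reference)
```

In the paper this split runs per block with overlaps and uses the Contourlet transform; the sketch only illustrates the high/low recombination idea.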
Computer 3D site model generation based on aerial images
NASA Astrophysics Data System (ADS)
Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.
1997-07-01
The technology for 3D model design of real-world scenes and their photorealistic rendering is a current topic of investigation. Such technology is attractive for a vast variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual reality entertainment, to mention just a few. 3D photorealistic models of urban areas are now often discussed as an upgrade from existing 2D geographic information systems. The possibility of generating a site model with small details depends on two main factors: the available source dataset and computer power resources. In this paper a PC-based technology is presented, so that scenes of middle resolution (scale of 1:1000) can be constructed. The datasets are gray-level aerial stereo pairs of photographs (scale of 1:14000) and true-color ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.
Configuration and Specifications of AN Unmanned Aerial Vehicle for Precision Agriculture
NASA Astrophysics Data System (ADS)
Erena, M.; Montesinos, S.; Portillo, D.; Alvarez, J.; Marin, C.; Fernandez, L.; Henarejos, J. M.; Ruiz, L. A.
2016-06-01
Unmanned Aerial Vehicles (UAVs) with multispectral sensors are increasingly attractive in geosciences for data capture and map updating at high spatial and temporal resolutions. These autonomously-flying systems can be equipped with different sensors, such as a six-band multispectral camera (Tetracam mini-MCA-6), a GPS Ublox M8N, MEMS gyroscopes, and miniaturized sensor systems for navigation, positioning, and mapping purposes. These systems can be used for data collection in precision viticulture. In this study, the efficiency of a light UAV system for data collection, processing, and map updating in small areas is evaluated, generating correlations between classification maps derived from remote sensing and production maps. Based on the comparison of the indices derived from UAVs incorporating infrared sensors with those obtained by satellites (Sentinel 2A and Landsat 8), UAVs show promise for the characterization of vineyard plots with high spatial variability, despite the low vegetative coverage of these crops. Consequently, a procedure for zoning map production based on UAV images could provide important information for farmers.
Hassanein, Mohamed; El-Sheimy, Naser
2018-01-01
Over the last decade, the use of unmanned aerial vehicle (UAV) technology has evolved significantly in different applications, as it provides a special platform capable of combining the benefits of terrestrial and aerial remote sensing. Therefore, such technology has been established as an important source of data collection for different precision agriculture (PA) applications such as crop health monitoring and weed management. Generally, these PA applications depend on performing a vegetation segmentation process as an initial step, which aims to detect the vegetation objects in collected agricultural field images. The main result of the vegetation segmentation process is a binary image, where vegetation is presented in white and the remaining objects in black. Such a process can easily be performed using different vegetation indexes derived from multispectral imagery. Recently, to expand the use of UAV imaging systems for PA applications, it has become important to reduce the cost of such systems by using low-cost RGB cameras. Thus, developing vegetation segmentation techniques for RGB images is a challenging problem. This paper introduces a new vegetation segmentation methodology for low-cost UAV RGB images, which depends on the hue color channel. The proposed methodology follows the assumption that the colors in any agricultural field image can be divided into vegetation and non-vegetation colors. Therefore, four main steps are developed to detect five different threshold values using the hue histogram of the RGB image; these thresholds are capable of discriminating the dominant color, either vegetation or non-vegetation, within the agricultural field image. The achieved results show the ability of the proposed methodology to generate accurate and stable vegetation segmentation, with a mean accuracy of 87.29% and a standard deviation of 12.5%. PMID:29670055
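A minimal sketch of hue-based vegetation masking, assuming a single fixed green hue band in place of the paper's five histogram-derived thresholds (the band limits and demo pixel values below are illustrative):

```python
import numpy as np

def hue_of(rgb):
    # per-pixel hue in degrees [0, 360) from an RGB array scaled to [0, 1]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    c = np.where(mx == mn, 1e-9, mx - mn)          # avoid divide-by-zero on grays
    h = np.where(mx == r, ((g - b) / c) % 6,
        np.where(mx == g, (b - r) / c + 2, (r - g) / c + 4))
    return h * 60.0

def vegetation_mask(rgb, lo=60.0, hi=180.0):
    # True where hue falls in an assumed "green" band; the fixed [lo, hi]
    # band stands in for the paper's five histogram-derived thresholds
    h = hue_of(rgb)
    return (h >= lo) & (h <= hi)

demo = np.array([[[0.1, 0.6, 0.1], [0.6, 0.4, 0.2]]])  # leafy green vs. soil brown
print(vegetation_mask(demo))  # -> [[ True False]]
```

Working in hue alone makes the decision largely independent of brightness, which is why the approach tolerates the uneven illumination typical of UAV imagery.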
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in rapid on-site detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Ratio maps of iron ore deposits Atlantic City district, Wyoming
NASA Technical Reports Server (NTRS)
Vincent, R. K.
1973-01-01
Preliminary results of a spectral ratioing technique are shown for a region at the southern end of the Wind River Range, Wyoming. Digital ratio graymaps and analog ratio images have been produced for the test site, but ground truth is not yet available for thorough interpretation of these products. ERTS analog ratio images were found to be generally better than either ERTS single-channel images or high-altitude aerial photos for the discrimination of vegetation from non-vegetation in the test site region. Some linear geological features smaller than the ERTS spatial resolution are seen as well in ERTS ratio and single-channel images as in high-altitude aerial photography. Geochemical information appears to be extractable from ERTS data. Good preliminary quantitative agreement between ERTS-derived ratios and laboratory-derived reflectance ratios of rocks and minerals encourages plans to use lab data as training sets for a simple ratio gating logic approach to automatic recognition maps.
NASA Technical Reports Server (NTRS)
1978-01-01
NASA remote sensing technology is being employed in archeological studies of the Anasazi Indians, who lived in New Mexico one thousand years ago. Under contract with the National Park Service, NASA's Technology Applications Center at the University of New Mexico is interpreting multispectral scanner data and demonstrating how aerospace scanning techniques can uncover features of prehistoric ruins not visible in conventional aerial photographs. The Center's initial study focused on Chaco Canyon, a pre-Columbian Anasazi site in northeastern New Mexico. Chaco Canyon is a national monument and it has been well explored on the ground and by aerial photography. But the National Park Service was interested in the potential of multispectral scanning for producing evidence of prehistoric roads, field patterns and dwelling areas not discernible in aerial photographs. The multispectral scanner produces imaging data in the invisible as well as the visible portions of the spectrum. This data is converted to pictures which bring out features not visible to the naked eye or to cameras. The Technology Applications Center joined forces with Bendix Aerospace Systems Division, Ann Arbor, Michigan, which provided a scanner-equipped airplane for mapping the Chaco Canyon area. The NASA group processed the scanner images and employed computerized image enhancement techniques to bring out additional detail.
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification with the digital surface models of the two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
Optical quality of the living cat eye
Bonds, A. B.
1974-01-01
1. The optical quality of the living cat eye was measured under conditions similar to those of cat retinal ganglion cell experiments by recording the aerial image of a nearly monochromatic thin line of light. 2. Experiments were performed to assess the nature of the fundal reflexion of the cat eye, which was found to behave essentially as a diffuser. 3. The optical Modulation Transfer Function (MTF) was calculated from the measured aerial linespread using Fourier mathematics; the MTF of a `typical' cat eye was averaged from data collected from ten eyes. 4. The state of focus of the optical system, the pupil size and the angle of the light incident on the eye were all varied to determine their effect on image quality. 5. By using an image rotator, the aerial linespread was measured for several orientations of the line; these measurements yielded an approximation of the two-dimensional pointspread completely characterizing the optical system. 6. Evidence is reviewed to show that the optical resolution of the cat, albeit some 3-5 times worse than that of human, appears to be better than the neural resolution of its retina and its visual system as a whole. PMID:4449081
Design of an integrated aerial image sensor
NASA Astrophysics Data System (ADS)
Xue, Jing; Spanos, Costas J.
2005-05-01
The subject of this paper is a novel integrated aerial image sensor (IAIS) system suitable for integration within the surface of an autonomous test wafer. The IAIS could be used as a lithography processing monitor, affording a "wafer's eye view" of the process and therefore facilitating advanced process control and diagnostics without integrating (and dedicating) the sensor to the processing equipment. The IAIS is composed of an aperture mask and an array of photo-detectors. In order to retrieve nanometer-scale resolution of the aerial image with a practical photo-detector pixel size, we propose an aperture mask design involving a series of spatial-phase "moving" aperture groups. We demonstrate a design example aimed at the 65 nm technology node through TEMPEST simulation. The optimized key design parameters include an aperture width in the range of 30 nm and an aperture thickness in the range of 70 nm, and offer a spatial resolution of about 5 nm, all with comfortable fabrication tolerances. Our preliminary simulation work indicates that the IAIS may also be applicable to immersion lithography. A bench-top far-field experiment verifies that our approach of spatial-frequency down-shifting by forming large Moiré patterns is feasible.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires no prior camera calibration or other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
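The topology idea, pruning candidate matching pairs with flight-log positions, can be sketched as follows; the local-plane coordinates and the 60 m threshold are illustrative assumptions, not values from the paper:

```python
import math
from itertools import combinations

def candidate_pairs(positions, max_dist=60.0):
    # keep only image pairs whose camera positions (local-plane x, y in
    # metres from the flight log) lie within max_dist of each other,
    # so feature matching skips pairs that cannot overlap
    pairs = []
    for (i, (xi, yi)), (j, (xj, yj)) in combinations(enumerate(positions), 2):
        if math.hypot(xi - xj, yi - yj) <= max_dist:
            pairs.append((i, j))
    return pairs

tags = [(0.0, 0.0), (50.0, 0.0), (500.0, 0.0)]  # three camera positions
print(candidate_pairs(tags))  # -> [(0, 1)]
```

For n images, exhaustive matching costs O(n^2) pair comparisons; pruning with positions keeps only neighbours, which is what makes large scenes tractable.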
Near real-time shadow detection and removal in aerial motion imagery application
NASA Astrophysics Data System (ADS)
Silva, Guilherme F.; Carneiro, Grace B.; Doth, Ricardo; Amaral, Leonardo A.; Azevedo, Dario F. G. de
2018-06-01
This work presents a method to automatically detect and remove shadows in urban aerial images, and its application in an aerospace remote monitoring system requiring near real-time processing. Our detection method generates shadow masks and is accelerated by GPU programming. To obtain the shadow masks, we convert images from RGB to the CIELCh model, calculate a modified Specthem ratio, and apply multilevel thresholding. Morphological operations are used to reduce shadow mask noise. The shadow masks are then used to remove shadows from the original images using the illumination ratio of the shadow/non-shadow regions. We obtained a shadow detection accuracy of around 93% and shadow removal results comparable to the state of the art while maintaining execution time under real-time constraints.
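The final removal step, rescaling shadow pixels by the lit/shadow illumination ratio, might look like this minimal single-channel sketch; the mask is assumed given (the paper derives it from the Specthem ratio and multilevel thresholding), and the toy image is illustrative:

```python
import numpy as np

def remove_shadows(image, mask):
    # rescale shadow pixels by the mean illumination ratio of
    # non-shadow to shadow regions; `mask` is True where shadow was detected
    ratio = image[~mask].mean() / image[mask].mean()
    out = image.copy()
    out[mask] = image[mask] * ratio
    return out

image = np.array([[0.2, 0.2], [0.8, 0.8]])  # top row in shadow
mask = image < 0.5
print(remove_shadows(image, mask))  # -> all pixels ~0.8
```

A global ratio is the simplest variant; per-region ratios (one per connected shadow component) handle scenes with differently lit shadow areas better.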
Modelling and representation issues in automated feature extraction from aerial and satellite images
NASA Astrophysics Data System (ADS)
Sowmya, Arcot; Trinder, John
New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.
A Reassessment of the Mars Ocean Hypothesis
NASA Technical Reports Server (NTRS)
Parker, T. J.
2004-01-01
Initial work on the identification and mapping of potential ancient shorelines on Mars was based on Viking Orbiter image data (Parker et al., 1987, 1989, 1993). The Viking Orbiters were designed to locate landing sites for the two landers and were not specifically intended to map the entire planet. Fortunately, they mapped the entire planet. Unfortunately, they did so at an average resolution of greater than 200 m/pixel. Higher resolution images, even mosaics of interesting regions, are available, but relatively sparse. Mapping of shorelines on Earth requires both high-resolution aerial photos or satellite images and good topographic information. Three significant sources of additional data from missions subsequent to Viking are useful for reassessing the ocean hypothesis. These are: MGS MOC images; MGS MOLA topography; Odyssey THEMIS IR and VIS images; and MER surface geology at Meridiani and Gusev. Okay, my mistake: Four.
Research on detection method of UAV obstruction based on binocular vision
NASA Astrophysics Data System (ADS)
Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao
2018-04-01
For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problem of noise and brightness differences in the actual captured images. The distance to the nearest obstacle is calculated using the disparity map generated by binocular vision. Then the contour of the obstacle is extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during UAV flight are achieved. Based on a series of tests, the error of distance measurement remains within 2.24% over the measuring range of 5 m to 20 m.
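The ranging itself rests on the pinhole stereo relation Z = f·B/d; a minimal sketch, where the focal length, baseline and disparity values are assumed for illustration rather than taken from the paper:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # pinhole stereo range: Z = f * B / d
    # (focal length in pixels, baseline in metres, disparity in pixels)
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: no valid match")
    return focal_px * baseline_m / disparity_px

# assumed 700 px focal length, 0.12 m baseline, 8.4 px measured disparity
print(depth_from_disparity(8.4, 700.0, 0.12))  # ~10 m
```

Because depth varies as 1/d, a fixed sub-pixel disparity error grows quadratically with range, which is why the percentage error is quoted over a bounded 5-20 m interval.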
EROS Main Image File: A Picture Perfect Database for Landsat Imagery and Aerial Photography.
ERIC Educational Resources Information Center
Jack, Robert F.
1984-01-01
Describes Earth Resources Observation System online database, which provides access to computerized images of Earth obtained via satellite. Highlights include retrieval system and commands, types of images, search strategies, other online functions, and interpretation of accessions. Satellite information, sources and samples of accessions, and…
Unmanned aerial systems-based remote sensing for monitoring sorghum growth and development
Shafian, Sanaz; Schnell, Ronnie; Bagavathiannan, Muthukumar; Valasek, John; Shi, Yeyin; Olsenholler, Jeff
2018-01-01
Unmanned Aerial Vehicles and Systems (UAV or UAS) have become increasingly popular in recent years for agricultural research applications. UAS are capable of acquiring images with the high spatial and temporal resolutions that are ideal for applications in agriculture. The objective of this study was to evaluate the performance of a UAS-based remote sensing system for quantification of crop growth parameters of sorghum (Sorghum bicolor L.), including leaf area index (LAI), fractional vegetation cover (fc) and yield. The study was conducted at the Texas A&M Research Farm near College Station, Texas, United States. A fixed-wing UAS equipped with a multispectral sensor was used to collect image data during the 2016 growing season (April–October). Flight missions were successfully carried out at 50 days after planting (DAP; 25 May), 66 DAP (10 June) and 74 DAP (18 June). These flight missions provided image data covering the middle growth period of sorghum with a spatial resolution of approximately 6.5 cm. Field measurements of LAI and fc were also collected. Four vegetation indices were calculated using the UAS images. Among those indices, the normalized difference vegetation index (NDVI) showed the highest correlation with LAI, fc and yield, with R2 values of 0.91, 0.89 and 0.58, respectively. Empirical relationships between NDVI and LAI and between NDVI and fc were validated and proved to be accurate for estimating LAI and fc from UAS-derived NDVI values. NDVI determined from UAS imagery acquired during the flowering stage (74 DAP) was found to be the most highly correlated with final grain yield. The observed high correlations between UAS-derived NDVI and the crop growth parameters (fc, LAI and grain yield) suggest the applicability of UAS for within-season data collection in agricultural crops such as sorghum. PMID:29715311
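The NDVI used throughout the study is a simple band ratio; a minimal sketch, with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # normalized difference vegetation index from NIR and red reflectance;
    # eps guards against division by zero on dark pixels
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

print(ndvi([0.5, 0.6], [0.1, 0.3]))  # -> approx. [0.667 0.333]
```

The normalization bounds the index to [-1, 1], which is what makes values comparable across flights with different illumination.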
From Image Analysis to Computer Vision: Motives, Methods, and Milestones.
1998-07-01
images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial photographs; but by the 1960s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision
Multispectral Remote Sensing of the Earth and Environment Using KHawk Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Gowravaram, Saket
This thesis focuses on the development and testing of the KHawk multispectral remote sensing system for environmental and agricultural applications. KHawk Unmanned Aircraft System (UAS), a small and low-cost remote sensing platform, is used as the test bed for aerial video acquisition. An efficient image geotagging and photogrammetric procedure for aerial map generation is described, followed by a comprehensive error analysis on the generated maps. The developed procedure is also used for generation of multispectral aerial maps including red, near infrared (NIR) and colored infrared (CIR) maps. A robust Normalized Difference Vegetation index (NDVI) calibration procedure is proposed and validated by ground tests and KHawk flight test. Finally, the generated aerial maps and their corresponding Digital Elevation Models (DEMs) are used for typical application scenarios including prescribed fire monitoring, initial fire line estimation, and tree health monitoring.
Adaptive Texture Synthesis for Large Scale City Modeling
NASA Astrophysics Data System (ADS)
Despine, G.; Colleu, T.
2015-02-01
Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the façades.
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
An improved algorithm of mask image dodging for aerial image
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi
2011-12-01
The technology of mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but it is deficient in local contrast; meanwhile, the ratio method works better for local contrast, but sometimes it makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, building on an analysis of both methods, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
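The two variants being compared can be sketched directly; `background` is a low-pass illumination estimate (the "mask"), and the toy arrays are illustrative:

```python
import numpy as np

def dodge(image, background, method="difference"):
    # correct uneven luminance given a low-pass illumination estimate
    target = background.mean()                            # desired overall brightness
    if method == "difference":
        return image - background + target                # uniform tone, flatter local contrast
    return image / np.maximum(background, 1e-9) * target  # ratio: stronger local contrast,
                                                          # but can over-brighten dark areas

image = np.array([0.3, 0.6, 0.5, 1.0])
background = np.array([0.4, 0.4, 0.8, 0.8])               # left half darker than right
print(dodge(image, background, "difference"))
print(dodge(image, background, "ratio"))
```

On the same input the ratio method amplifies pixels where the background is dark, which is exactly the over-brightening defect the abstract describes.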
2007-01-22
requirements for the degree of Master of Science, Plan II. Approval for the Report and Comprehensive Examination: Committee: Professor S. Shankar Sastry...Plans for the high-level planner...Idealized flight for purposes of analyzing...Motion Stamping: In order to use the RMFPP algorithm, we must first motion stamp each image, i.e. determine the orientation and position of the camera when
Autonomous agricultural remote sensing systems with high spatial and temporal resolutions
NASA Astrophysics Data System (ADS)
Xiang, Haitao
In this research, two novel agricultural remote sensing (RS) systems, a Stand-alone Infield Crop Monitor RS System (SICMRS) and an autonomous Unmanned Aerial Vehicle (UAV) based RS system, have been studied. A high-resolution digital color and multi-spectral camera was used as the image sensor for the SICMRS system. An artificially intelligent (AI) controller based on an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) was developed. Morrow Plots corn field RS images for the 2004 and 2006 growing seasons were collected by the SICMRS system. The field site contained 8 subplots (9.14 m x 9.14 m) that were planted with corn, and three different fertilizer treatments were used among those subplots. The raw RS images were geometrically corrected, resampled to 10 cm resolution, stripped of soil background, and calibrated to real reflectance. The RS images from the two growing seasons were studied and 10 different vegetation indices were derived from each day's image. The results of the image processing demonstrated that the vegetation indices have temporal effects. To achieve high-quality RS data, one has to utilize the right indices and capture the images at the right time in the growing season. Maximum variation among the image data set occurs within the V6-V10 stages, which indicates that these stages are the best period to identify the spatial variability caused by nutrient stress in the corn field. The derived vegetation indices were also used to build yield prediction models via linear regression. All of the yield prediction models were then evaluated by comparing R2-values, and the best index model from each day's image was picked based on the highest R2-value. It was shown that the green normalized difference vegetation index (GNDVI) based model is more sensitive for yield prediction than other index-based models.
During the VT-R4 stages, the GNDVI-based models were able to explain more than 95% of the potential corn yield consistently for both seasons; the VT-R4 stages are thus the best period of time to estimate corn yield. The SICMRS system is only suitable for RS research at a fixed location. In order to provide more flexibility in RS image collection, a novel UAV-based system has been studied. The UAV-based agricultural RS system used a light helicopter platform equipped with a multi-spectral camera. The UAV control system consisted of an on-board subsystem and a ground station subsystem. For the on-board subsystem, an Extended Kalman Filter (EKF) based UAV navigation system was designed and implemented. The navigation system, using low-cost inertial sensors, a magnetometer, GPS and a single-board computer, was capable of providing continuous estimates of UAV position and attitude at 50 Hz using sensor fusion techniques. The ground station subsystem was designed to be an interface between a human operator and the UAV for mission planning, flight command activation, and real-time flight monitoring. The navigation system is controlled by the ground station and is able to navigate the UAV to reach predefined waypoints and trigger the multi-spectral camera, so that the aerial images at each point can be captured automatically. The developed UAV RS system provides maximum flexibility in crop field RS image collection. It is essential to perform geometric correction and geocoding before an aerial image can be used for precision farming. An automatic (no Ground Control Point (GCP) needed) UAV image georeferencing algorithm was developed. This algorithm performs automatic image correction and georeferencing based on the real-time navigation data and a camera lens distortion model. The accuracy of the georeferencing algorithm was better than 90 cm according to a series of tests.
The accuracy that has been achieved indicates that not only is the position solution good, but the attitude error is also extremely small. Waypoint planning for UAV flights was investigated. The results suggested that a 16.5% forward overlap and a 15% lateral overlap are required to avoid missing the desired mapping area when the UAV flies above 45 m with a 4.5 mm lens. A whole-field mosaic image can be generated from the individual image georeferencing information. A 0.569 m mosaic error has been achieved, and this accuracy is sufficient for many of the intended precision agricultural applications. With careful interpretation, the UAV images are an excellent source of high spatial and temporal resolution data for precision agricultural applications. (Abstract shortened by UMI.)
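The interplay of flying height, focal length and overlap that underlies the waypoint planning above can be sketched with pinhole-camera geometry (the 6.6 mm sensor width is an assumed value for illustration, not the thesis's camera specification):

```python
# Ground footprint of a frame camera under the pinhole approximation:
#   footprint = height * sensor_width / focal_length
def footprint_m(height_m: float, focal_mm: float, sensor_mm: float) -> float:
    return height_m * sensor_mm / focal_mm

# Waypoint spacing that still guarantees the requested overlap between frames
def waypoint_spacing_m(footprint: float, overlap: float) -> float:
    return footprint * (1.0 - overlap)

# Example: 45 m height and a 4.5 mm lens, with an assumed 6.6 mm sensor width
fp = footprint_m(45.0, 4.5, 6.6)
along_track = waypoint_spacing_m(fp, 0.165)   # 16.5% forward overlap
across_track = waypoint_spacing_m(fp, 0.15)   # 15% lateral overlap
```

Flying lower shrinks the footprint, so waypoints and flight lines must be spaced proportionally closer to preserve the same overlap.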
A Teacher's Introduction to Remote Sensing.
ERIC Educational Resources Information Center
Kirman, Joseph M.
1997-01-01
Defines remote sensing as the examination of something without touching it. Generally, this refers to satellite and aerial photographic images. Discusses how this technology and resulting knowledge can be integrated into geography classes. Includes a sample unit using images. (MJP)
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. 
Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
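The object-based random forests step can be sketched as below (synthetic height and intensity features invented for illustration; the study's feature set, derived from multispectral ALS point clouds, is far richer):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-object features: [height above ground (m), channel-1 intensity, channel-2 intensity]
building = np.column_stack([rng.normal(6, 1, n), rng.normal(0.40, 0.05, n), rng.normal(0.30, 0.05, n)])
tree     = np.column_stack([rng.normal(8, 2, n), rng.normal(0.20, 0.05, n), rng.normal(0.60, 0.05, n)])
asphalt  = np.column_stack([rng.normal(0.0, 0.2, n), rng.normal(0.10, 0.03, n), rng.normal(0.10, 0.03, n)])
low_veg  = np.column_stack([rng.normal(0.2, 0.2, n), rng.normal(0.15, 0.03, n), rng.normal(0.50, 0.05, n)])
X = np.vstack([building, tree, asphalt, low_veg])
y = np.repeat([0, 1, 2, 3], n)

clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0).fit(X, y)
oob_accuracy = clf.oob_score_              # out-of-bag estimate of overall accuracy
importances = clf.feature_importances_     # per-feature importance scores
```

The importance vector plays the same role as the article's feature importance analysis of which intensity channels contribute most.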
DoD Comprehensive Military Unmanned Aerial Vehicle Smart Device Ground Control Station Threat Model
2015-04-01
design, implementation, and test evaluation were interviewed to evaluate the existing gaps in the DoD processes for cybersecurity. This group exposed...such as antenna design and signal reception have made satellite communication networks a viable solution for smart devices on the battlefield...
2. AERIAL VIEW OF MINUTEMAN SILOS. Low oblique aerial view ...
2. AERIAL VIEW OF MINUTEMAN SILOS. Low oblique aerial view (original in color) of the two launch silos, covered. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Missile Silo Type, Test Area 1-100, northeast end of Test Area 1-100 Road, Boron, Kern County, CA
NASA Astrophysics Data System (ADS)
Huang, Haifeng; Long, Jingjing; Yi, Wu; Yi, Qinglin; Zhang, Guodong; Lei, Bangjun
2017-11-01
In recent years, unmanned aerial vehicles (UAVs) have become widely used in emergency investigations of major natural hazards over large areas; however, UAVs are less commonly employed to investigate single geo-hazards. Based on a number of successful investigations in the Three Gorges Reservoir area, China, a complete UAV-based method for performing emergency investigations of single geo-hazards is described. First, a customized UAV system that consists of a multi-rotor UAV subsystem, an aerial photography subsystem, a ground control subsystem and a ground surveillance subsystem is described in detail. The implementation process, which includes four steps, i.e., indoor preparation, site investigation, on-site fast processing and application, and indoor comprehensive processing and application, is then elaborated, and two investigation schemes, automatic and manual, that are used in the site investigation step are put forward. Moreover, some key techniques and methods - e.g., the layout and measurement of ground control points (GCPs), route planning, flight control and image collection, and the Structure from Motion (SfM) photogrammetry processing - are explained. Finally, three applications are given. Experience has shown that using UAVs for emergency investigation of single geo-hazards greatly reduces the time, intensity and risks associated with on-site work and provides valuable, high-accuracy, high-resolution information that supports emergency responses.
Graph-based urban scene analysis using symbolic data
NASA Astrophysics Data System (ADS)
Moissinac, Henri; Maitre, Henri; Bloch, Isabelle
1995-07-01
A framework is presented for the interpretation of an urban landscape based on the analysis of aerial pictures. The method is designed to use a priori knowledge provided by a geographic map in order to improve the image analysis stage. A coherent final interpretation of the studied area is proposed. It relies on a graph-based data structure to model the urban landscape, and on global uncertainty management to evaluate the final confidence we can have in the presented results. This structure and the uncertainty management reflect the hierarchy of the available data and of the interpretation levels.
NASA Technical Reports Server (NTRS)
Oommen, Thomas; Rebbapragada, Umaa; Cerminaro, Daniel
2012-01-01
In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change detection technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties concavity, convexity, orthogonality and rectangularity. Our results suggest that object-based analysis holds promise for automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery.
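Shape properties of the kind named above can be computed from an object's outline polygon; a minimal sketch (not the authors' implementation, and using an axis-aligned bounding box where a minimum-area rectangle would be tighter):

```python
def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula."""
    s = sum(x1 * y2 - y1 * x2 for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))
    return abs(s) / 2.0

def shape_features(poly):
    a = polygon_area(poly)
    convexity = a / polygon_area(convex_hull(poly))   # 1.0 for convex outlines
    xs, ys = [p[0] for p in poly], [p[1] for p in poly]
    rectangularity = a / ((max(xs) - min(xs)) * (max(ys) - min(ys)))
    return convexity, rectangularity
```

Intact building footprints score near 1.0 on both measures, while liquefaction-induced surface effects tend to produce irregular, low-scoring outlines.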
Detecting blind building façades from highly overlapping wide angle aerial imagery
NASA Astrophysics Data System (ADS)
Burochin, Jean-Pascal; Vallet, Bruno; Brédif, Mathieu; Mallet, Clément; Brosset, Thomas; Paparoditis, Nicolas
2014-10-01
This paper deals with the identification of blind building façades, i.e. façades which have no openings, in wide angle aerial images with a decimeter pixel size, acquired by nadir-looking cameras. This blindness characterization is in general crucial for real estate estimation and has, at least in France, a particular importance for evaluating the legal permission to construct on a parcel due to local urban planning schemes. We assume that we have at our disposal an aerial survey with a relatively high stereo overlap along-track and across-track, and a 3D city model of LoD 1 that may have been generated from the input images. The 3D model is textured with the aerial imagery by taking the 3D occlusions into account and by selecting, for each façade, the best-resolution texture that sees the whole façade. We then parse all 3D façade textures looking for evidence of openings (windows or doors). This evidence is characterized by a comprehensive set of basic radiometric and geometrical features. The blindness prognosis is then elaborated through a supervised Support Vector Machine (SVM) classification. Despite the relatively low resolution of the images, we reach a classification accuracy of around 85% on decimeter-resolution imagery with 60 × 40% stereo overlap. On the one hand, we show that the results are very sensitive to the texture resampling process and to vegetation presence on façade textures. On the other hand, the most relevant features for our classification framework are related to texture uniformity, horizontal aspect and the maximal contrast of the opening detections. We conclude that standard aerial imagery used to build 3D city models can also be exploited, to some extent and at no additional cost, for façade blindness characterisation.
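Two radiometric cues of the kind mentioned above (texture uniformity and opening contrast) can be sketched on a synthetic façade texture; the paper's actual feature set and classifier training are considerably richer:

```python
import numpy as np

def texture_features(tex):
    """tex: 2D array of a rectified facade texture, values in [0, 255]."""
    p, _ = np.histogram(tex, bins=32, range=(0, 256))
    p = p / p.sum()
    uniformity = float((p ** 2).sum())               # high for blank walls, low for textured ones
    contrast = float(tex.max()) - float(tex.min())   # crude opening-contrast cue
    return uniformity, contrast

# Synthetic textures: a blank wall vs. a wall with dark (hypothetical) window openings
rng = np.random.default_rng(0)
blank = np.clip(rng.normal(180, 5, (64, 64)), 0, 255)
windowed = blank.copy()
windowed[10:25, 10:25] = 40
windowed[10:25, 40:55] = 40
u_blank, c_blank = texture_features(blank)
u_win, c_win = texture_features(windowed)
```

A supervised classifier (an SVM in the paper) would then separate blind from non-blind façades in such a feature space.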
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen; Huang, Mei-Jen; Tseng, Chih-Ming
2016-04-01
Due to its high seismicity and heavy annual rainfall, Taiwan experiences numerous landslides every year, and their severe impacts affect the entire island. For catastrophic landslides, key information, including the extent of the landslide, volume estimates and the subsequent evolution, is essential when analyzing the triggering mechanism and performing hazard assessment and mitigation. Morphological analysis therefore gives a general overview of a landslide and is considered one of the most fundamental sources of information. Typhoon Morakot brought extreme, prolonged rainfall to Taiwan in August 2009 and caused severe disasters. In this study we integrate several technologies, in particular an Unmanned Aerial Vehicle (UAV) with a multi-spectral camera, to decipher the consequences, the potential hazard and the social impact. This study integrates several methods: 1) remote-sensing images gathered by UAV and aerial photos taken in different periods; 2) in-situ geologic field investigation; and 3) differential GPS and RTK GPS geomatic measurements. These methods allow DTMs to be constructed before and after the landslide, as well as for subsequent periods, using aerial photos and UAV-derived images, and the resulting datasets permit analysis of the morphological changes. In the past, studies of sediment budgets usually relied on field investigation, but owing to inconvenient transportation, topographic barriers or remote locations, such surveys sometimes can hardly be completed. In recent years, the rapid development of remote sensing technology has improved image resolution and quality significantly, and can provide a wide range of image data and essential, precise information.
The purpose of this study is to investigate river migration and to evaluate the amount of migration along the Laishe River by analyzing 3D DEMs from before and after Typhoon Morakot. The DEMs were built from aerial images taken by a digital mapping camera (DMC) and by an airborne digital scanner 40 (ADS40) before and after the typhoon event. More recently, this research has integrated Unmanned Aerial Vehicle (UAV) and oblique photogrammetric technologies to acquire photos with a 5-10 cm GSD. This approach permits construction of a true 3D model, so ground information can be deciphered more realistically. 10-20 cm DSMs and DEMs, together with field GPS data, were compiled to decipher the morphologic changes. All this information, especially the true 3D model, provides detailed ground information that may be used to evaluate the landslide triggering mechanism and river channel evolution. The goals of this study are to integrate the UAS system, to decipher the sliding process and morphologic changes of large landslide areas as well as sediment transport and budgets, and to investigate river migration. The results provide not only geomatics and GIS datasets of the hazards but also essential geomorphologic information for other studies and for hazard mitigation and planning.
Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors.
Kakue, Takashi; Nishitsuji, Takashi; Kawashima, Tetsuya; Suzuki, Keisuke; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2015-07-08
We demonstrate an aerial projection system for reconstructing 3D motion pictures based on holography. The system consists of an optical source, a spatial light modulator corresponding to a display, and two parabolic mirrors. The spatial light modulator displays holograms calculated by computer and can reconstruct holographic motion pictures near the surface of the modulator. The two parabolic mirrors can project floating 3D images of the motion pictures formed by the spatial light modulator without mechanical scanning or rotation. In this demonstration, we used a phase-modulation-type spatial light modulator. The number of pixels and the pixel pitch of the modulator were 1,080 × 1,920 and 8.0 μm × 8.0 μm, respectively. The diameter, height and focal length of each parabolic mirror were 288 mm, 55 mm and 100 mm, respectively. We succeeded in aerially projecting 3D motion pictures of size ~2.5 mm³ with this system composed of the modulator and mirrors. In addition, by applying a fast computational algorithm for holograms, we achieved hologram calculations at ~12 ms per hologram with 4 CPU cores.
Aerial video mosaicking using binary feature tracking
NASA Astrophysics Data System (ADS)
Minnehan, Breton; Savakis, Andreas
2015-05-01
Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
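The robustness claim about cascaded homographies can be illustrated numerically (synthetic pure translations, not the paper's data): a small per-pair estimation bias accumulates when inter-frame homographies are chained, whereas estimating each frame-to-ground homography directly avoids this.

```python
import numpy as np

def apply_h(H, p):
    """Apply a 3x3 homography to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# True inter-frame motion: a pure 10 px translation per frame
H_true = np.array([[1.0, 0, 10], [0, 1, 0], [0, 0, 1]])
# A pairwise estimate with a tiny 0.1 px bias, as matching noise would introduce
H_est = np.array([[1.0, 0, 10.1], [0, 1, 0], [0, 0, 1]])

# Cascading 100 pairwise homographies lets the bias accumulate linearly
H_cascaded = np.linalg.matrix_power(H_est, 100)
H_ground = np.linalg.matrix_power(H_true, 100)
drift = apply_h(H_cascaded, (0, 0)) - apply_h(H_ground, (0, 0))  # accumulated x-drift in px
```

After 100 frames the 0.1 px bias has grown to roughly 10 px of drift, which is the cascaded-error effect the frame-to-ground formulation sidesteps.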
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform has recently emerged for image acquisition: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early-season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible-light camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, the area covered by each image and the flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant for the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that a trade-off between spectral and spatial resolution is needed to optimise the flight mission according to each agronomical objective, as affected by the size of the smallest object to be discriminated (weed plants or weed patches).
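The Excess Green Index mentioned above has a compact closed form on normalised chromatic coordinates; a minimal sketch with invented reflectance values (not measurements from the study):

```python
import numpy as np

def excess_green(r, g, b):
    """Excess Green Index on normalised chromatic coordinates: ExG = 2g - r - b."""
    total = r + g + b + 1e-9          # epsilon guards against division by zero
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

# Hypothetical reflectance of weed/crop pixels vs. bare soil
veg = excess_green(np.array([0.10]), np.array([0.30]), np.array([0.08]))
soil = excess_green(np.array([0.25]), np.array([0.22]), np.array([0.18]))
```

Comparable one-liners exist for NGRDI, and for NDVI when a near-infrared band is available.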
Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.
Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo
2018-04-16
Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaic. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray scale difference, the gradient minimum, and the optical flow value of pixels in adjacent image overlapped area in a neighborhood are calculated, which can be applied to creating an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and avoid the ground objects better. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in the panoramic UAV images.
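The dynamic-programming seam search can be sketched in its simplest (seam-carving-style) form; the paper's energy function additionally combines grayscale difference, gradients and optical flow, and its aggregation and traversal strategy is more adaptive:

```python
import numpy as np

def find_seam(energy):
    """Return one column index per row, tracing a minimal-cost vertical seam."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)          # the up-to-three predecessors above j
            k = lo + int(np.argmin(cost[i - 1, lo:hi]))
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    j = int(np.argmin(cost[-1]))                            # cheapest endpoint in the last row
    seam = [j]
    for i in range(h - 1, 0, -1):                           # backtrack to the top row
        j = int(back[i, j])
        seam.append(j)
    return seam[::-1]
```

Running such a search on the overlap region's energy map yields the splicing path along which the two images are blended.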
Construction of a small and lightweight hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Vogel, Britta; Hünniger, Dirk; Bastian, Georg
2014-05-01
The analysis of reflected sunlight offers a great opportunity to gain information about the environment, including vegetation and soil. In the case of plants, the wavelength ratio of the reflected light usually changes when the state of growth or state of health changes, so measurement of the reflected light allows conclusions to be drawn about the state of, amongst others, vegetation. Using a hyperspectral imaging system for data acquisition leads to a large dataset, which can be evaluated with respect to several different questions to obtain various information from a single measurement. Based on commercially available plain optical components, we developed a small and lightweight hyperspectral imaging system within the INTERREG IV A project SMART INSPECTORS (Smart Aerial Test Rigs with Infrared Spectrometers and Radar), which deals with the fusion of airborne visible and infrared imaging remote sensing instruments and wireless sensor networks for precision agriculture and environmental research. A high-performance camera was required in terms of good signal, good wavelength resolution and good spatial resolution, while severe constraints on size, proportions and mass had to be met due to the intended use on small unmanned aerial vehicles. The detector was chosen to operate without additional cooling. The refractive and focusing optical components were identified by supporting work with optical ray-tracing software and a self-developed program. We present details of the design and construction of our camera system, test results that confirm the optical simulation predictions, as well as our first measurements.
Topography changes monitoring of small islands using camera drone
NASA Astrophysics Data System (ADS)
Bang, E.
2017-12-01
Drone aerial photogrammetry was conducted to monitor topographic changes of small islands in the East Sea of Korea. Severe weather and sea waves are eroding the islands and sometimes cause landslides and rockfall. Due to rugged cliffs in all directions and poor accessibility, ground-based survey methods are inefficient for monitoring topographic changes over the whole area. Camera drones can provide digital images and video of every corner of the islands, and drone aerial photogrammetry is a powerful way to obtain a precise digital surface model (DSM) of a limited area. We have acquired a set of digital images to construct a textured 3D model of the project area every year since 2014. The flight height is less than 100 m above the tops of the islands to obtain a sufficient ground sampling distance (GSD). Most images were captured vertically during automatic flights, but we also flew drones around the islands with a camera angle of about 30°-45° to construct better 3D models. Every digital image is geo-referenced, but we also set several ground control points (GCPs) on the islands and measured their coordinates with RTK surveying methods to increase the absolute accuracy of the project. We constructed textured 3D models using a photogrammetry tool, which generates 3D spatial information from digital images. From the polygonal model, we derived DSMs with contour lines. Thematic maps such as hill-shade relief, aspect and slope maps were also produced; these maps help us understand the topographic conditions of the project area better. The purpose of this project is to monitor topographic change on these small islands. An elevation-difference map between the DSMs of each year was constructed. Two regions show large negative difference values. By comparing the constructed textured models and the captured digital images around these regions, we confirmed that one region has experienced a real topographic change, due to a huge rockfall near the center of the east island.
The size of the fallen rock can be measured exactly on the digital model: it is about 13 m × 6 m × 2 m (height × width × thickness). We believe that drone aerial photogrammetry can be an efficient method of detecting topographic change in complicated terrain.
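The elevation-difference analysis behind the rockfall detection can be sketched as follows (synthetic DSMs; the grid size and noise threshold are assumptions, not values from the project):

```python
import numpy as np

def volume_change(dsm_before, dsm_after, cell_size_m, noise_threshold_m=0.5):
    """Elevation-difference map plus eroded/deposited volumes.

    noise_threshold_m masks small differences attributable to DSM error.
    """
    diff = dsm_after - dsm_before
    loss = -diff[diff < -noise_threshold_m].sum() * cell_size_m ** 2
    gain = diff[diff > noise_threshold_m].sum() * cell_size_m ** 2
    return diff, loss, gain

# Synthetic rockfall: a 4 x 3 cell block drops by 2 m on a 1 m grid
before = np.full((10, 10), 50.0)
after = before.copy()
after[3:7, 2:5] -= 2.0
diff, loss, gain = volume_change(before, after, cell_size_m=1.0)
```

Summing only differences beyond the threshold keeps DSM noise out of the volume estimate.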
Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm
NASA Astrophysics Data System (ADS)
Foroutan, M.; Zimbelman, J. R.
2017-09-01
Increased application of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images of Earth as well as High Resolution Imaging Science Experiment (HiRISE) images of Mars, increases the need for automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as studies of climate-related change, together with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve semi-automatic extraction of linear features with small footprints in satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
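A minimal SOM training loop with the usual exponentially decaying learning rate and Gaussian neighborhood illustrates the competitive-learning principle (grid size, decay schedule and data here are illustrative, not the study's configuration):

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map trained by online competitive learning."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0] * grid[1], data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))   # best-matching unit
        lr = lr0 * np.exp(-t / iters)                      # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                # shrinking neighborhood
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)                     # pull the neighborhood toward x
    return w

def quantization_error(data, w):
    d = np.sqrt(((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.03, (200, 2)), rng.normal(0.8, 0.03, (200, 2))])
w = train_som(data)
qe = quantization_error(data, w)
```

The quantization error reported above is exactly this mean distance from each sample to its best-matching unit.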
Moving object detection in top-view aerial videos improved by image stacking
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen
2017-08-01
Image stacking is a well-known method used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. The moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
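The noise-suppression effect of stacking registered frames can be sketched as follows (the registration step is assumed solved; here the known synthetic shifts are simply inverted):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.random((32, 32)) * 255                      # stand-in for the stationary background
shifts = [(0, 0), (1, 2), (-2, 1), (3, -1), (0, -2)]    # simulated inter-frame motion
frames = [np.roll(truth, s, axis=(0, 1)) + rng.normal(0, 20, truth.shape) for s in shifts]

# Registration is assumed solved (by feature matching in practice); invert the known shifts
aligned = np.stack([np.roll(f, (-s[0], -s[1]), axis=(0, 1)) for f, s in zip(frames, shifts)])
stacked = np.median(aligned, axis=0)                    # median stack suppresses per-frame noise

err_single = np.abs(aligned[0] - truth).mean()
err_stack = np.abs(stacked - truth).mean()
```

The same redundancy that suppresses noise is what blurs small moving objects, which is why they need the separate handling described above.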
Pasadena, California Perspective View with Aerial Photo and Landsat Overlay
NASA Technical Reports Server (NTRS)
2000-01-01
This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada-Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provides the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. For a full-resolution, annotated version of this image, please select Figure 1, below: [figure removed for brevity, see original site] The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices.
The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 5.8 km (3.6 miles) x 10 km (6.2 miles). Location: 34.16 deg. North lat., 118.16 deg. West lon. Orientation: Looking North. Original Data Resolution: SRTM, 30 meters; Landsat, 30 meters; Aerial Photo, 3 meters (no vertical exaggeration). Date Acquired: February 16, 2000
USDA-ARS's Scientific Manuscript database
Thermal imaging has many potential uses from aerial platforms. A thermal imaging camera was brought into service to detect potential leakage and sand boils at the Mississippi River levee during the flood period of April and May, 2011. This camera was mounted on an agricultural aircraft and operated ...
Investigating an Aerial Image First
ERIC Educational Resources Information Center
Wyrembeck, Edward P.; Elmer, Jeffrey S.
2006-01-01
Most introductory optics lab activities begin with students locating the real image formed by a converging lens. The method is simple and straightforward--students move a screen back and forth until the real image is in sharp focus on the screen. Students then draw a simple ray diagram to explain the observation using only two or three special…
NASA Astrophysics Data System (ADS)
Xie, Bing; Duan, Zhemin; Chen, Yu
2017-11-01
The scene-match mode of navigation can assist a UAV in achieving autonomous navigation and other missions. However, aerial multi-frame images taken by a UAV in a complex flight environment can easily be affected by jitter, noise, and exposure, which lead to image blur, deformation, and other issues, and result in a decline in the detection rate of the regional target of interest. To address this problem, we propose a graded sub-pixel motion estimation algorithm that combines time-domain characteristics with frequency-domain phase correlation. Experimental results prove the validity and accuracy of the proposed algorithm.
3D exploitation of large urban photo archives
NASA Astrophysics Data System (ADS)
Cho, Peter; Snavely, Noah; Anderson, Ross
2010-04-01
Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
Kim, In-Ho; Jeon, Haemin; Baek, Seung-Chan; Hong, Won-Hwa; Jung, Hyung-Jo
2018-06-08
Bridge inspection using unmanned aerial vehicles (UAV) with high-performance vision sensors has received considerable attention due to its safety and reliability. As bridges age, the number of bridges that need to be inspected increases, and they require substantial maintenance costs. Therefore, a bridge inspection method based on a UAV with vision sensors is proposed as one of the promising strategies for maintaining bridges. In this paper, a crack identification method using a commercial UAV with a high-resolution vision sensor is investigated on an aging concrete bridge. First, a point cloud-based background model is generated in a preliminary flight. Then, cracks on the structural surface are detected with a deep learning algorithm, and their thickness and length are calculated. In the deep learning method, regions with convolutional neural networks (R-CNN)-based transfer learning is applied. As a result, a new network for the 384 collected crack images of 256 × 256 pixel resolution is generated from the pre-trained network. A field test is conducted to verify the proposed approach, and the experimental results prove that the UAV-based bridge inspection is effective at identifying and quantifying cracks on structures.
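The abstract above reports crack thickness and length computed from UAV imagery. A minimal sketch of the underlying geometry (not the authors' code; all sensor and crack numbers below are hypothetical) converts pixel measurements into physical units via the ground sampling distance of a pinhole camera:

```python
# Illustrative sketch: crack length/thickness in metres from pixel counts.
# Sensor, lens, distance, and crack pixel values are hypothetical examples.

def ground_sampling_distance(sensor_width_m, focal_length_m, distance_m, image_width_px):
    """Ground size of one pixel, from a simple pinhole-camera model."""
    return (sensor_width_m * distance_m) / (focal_length_m * image_width_px)

def crack_metrics(skeleton_length_px, area_px, gsd_m):
    """Length from the skeleton pixel count; mean thickness = area / length."""
    length_m = skeleton_length_px * gsd_m
    thickness_m = (area_px * gsd_m ** 2) / length_m if length_m > 0 else 0.0
    return length_m, thickness_m

# Hypothetical setup: 13.2 mm sensor, 8.8 mm lens, shot 3 m from the surface
gsd = ground_sampling_distance(0.0132, 0.0088, 3.0, 4000)  # metres per pixel
length, thickness = crack_metrics(skeleton_length_px=850, area_px=2550, gsd_m=gsd)
print(f"GSD {gsd * 1000:.2f} mm/px, length {length:.2f} m, thickness {thickness * 1000:.1f} mm")
```

The thickness estimate follows from dividing the crack's pixel area by its skeleton length, a common simplification when the crack is much longer than it is wide.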
NASA Astrophysics Data System (ADS)
Dąbski, Maciej; Zmarz, Anna; Pabjanek, Piotr; Korczak-Abshire, Małgorzata; Karsznia, Izabela; Chwedorzewska, Katarzyna J.
2017-08-01
High-resolution aerial images allow detailed analyses of periglacial landforms, which is of particular importance in light of climate change and resulting changes in active layer thickness. The aim of this study is to show the possibilities of using UAV-based photography to perform spatial analysis of periglacial landforms on the Demay Point peninsula, King George Island, and hence to supplement previous geomorphological studies of the South Shetland Islands. Photogrammetric flights were performed using a PW-ZOOM fixed-wing unmanned aerial vehicle. Digital elevation models (DEM) and maps of slope and contour lines were prepared in ESRI ArcGIS 10.3 with the Spatial Analyst extension, and three-dimensional visualizations in ESRI ArcScene 10.3 software. Careful interpretation of the orthophoto and DEM allowed us to vectorize polygons of landforms, such as (i) solifluction landforms (solifluction sheets, tongues, and lobes); (ii) scarps, taluses, and a protalus rampart; (iii) patterned ground (hummocks, sorted circles, stripes, nets and labyrinths, and nonsorted nets and stripes); (iv) coastal landforms (cliffs and beaches); (v) landslides and mud flows; and (vi) stone fields and bedrock outcrops. We conclude that geomorphological studies based on commonly accessible aerial and satellite images can underestimate the spatial extent of periglacial landforms and result in incomplete inventories. The PW-ZOOM UAV is well suited to gathering detailed geomorphological data and can be used in spatial analysis of periglacial landforms in the Western Antarctic Peninsula region.
NASA’s Aerial Survey of Polar Ice Expands Its Arctic Reach
2017-12-08
For the past eight years, Operation IceBridge, a NASA mission that conducts aerial surveys of polar ice, has produced unprecedented three-dimensional views of Arctic and Antarctic ice sheets, providing scientists with valuable data on how polar ice is changing in a warming world. Now, for the first time, the campaign will expand its reach to explore the Arctic’s Eurasian Basin through two research flights based out of Svalbard, a Norwegian archipelago in the northern Atlantic Ocean. More: go.nasa.gov/2ngAxX2 Credits: NASA/Nathan Kurtz NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram
Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle
Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; ...
2012-09-17
During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).
An evaluation of a UAV guidance system with consumer grade GPS receivers
NASA Astrophysics Data System (ADS)
Rosenberg, Abigail Stella
Remote sensing has been demonstrated to be an important tool in agricultural and natural resource management and research applications; however, limitations exist with traditional platforms (i.e., hand-held sensors, linear moves, vehicle-mounted sensors, airplanes, remotely piloted vehicles (RPVs), unmanned aerial vehicles (UAVs), and satellites). Rapid technological advances in electronics, computers, software applications, and the aerospace industry have dramatically reduced the cost and increased the availability of remote sensing technologies. Remote sensing imagery varies in spectral, spatial, and temporal resolution and is available from numerous providers. Appendix A presented results of a test project that acquired high-resolution aerial photography with an RPV to map the boundary of a 0.42 km2 fire area. The project mapped the boundaries of the fire area from a mosaic of the aerial images collected and compared this with ground-based measurements. The project achieved a 92.4% correlation between the aerial assessment and the ground truth data. Appendix B used multi-objective analysis to quantitatively assess the tradeoffs between different sensor platform attributes to identify the best overall technology. Experts were surveyed to identify the best overall technology at three different pixel sizes. Appendix C evaluated the positional accuracy of a relatively low cost UAV designed for high resolution remote sensing of small areas in order to determine the positional accuracy of sensor readings. The study evaluated the accuracy and uncertainty of a UAV flight route with respect to the programmed waypoints and of the UAV's GPS position, respectively. In addition, the potential displacement of sensor data was evaluated based on (1) GPS measurements on board the aircraft and (2) the autopilot's circuit board with 3-axis gyros and accelerometers (i.e., roll, pitch, and yaw). The accuracies were estimated based on a 95% confidence interval or similar methods.
The accuracy achieved in the second and third manuscripts demonstrates that reasonably priced, high resolution remote sensing via RPVs and UAVs is practical for agriculture and natural resource professionals.
NASA Technical Reports Server (NTRS)
Clarke, V. C., Jr.
1978-01-01
The capability of a remotely piloted airplane as a Mars exploration vehicle in the aerial survey mode is assessed. Specific experiment areas covered include: visual imaging; gamma ray and infrared reflectance spectroscopy; gravity field; magnetic field and electromagnetic sounding; and atmospheric composition and dynamics. It is concluded that (1) the most important use of a plane in the aerial survey mode would be in topical studies and returned sample site characterization; (2) the airplane offers the unique capability to do high resolution, oblique imaging, and repeated profile measurements in the atmospheric boundary layer; and (3) it offers the best platform from which to do electromagnetic sounding.
Accurate Inventories Of Irrigated Land
NASA Technical Reports Server (NTRS)
Wall, S.; Thomas, R.; Brown, C.
1992-01-01
System for taking land-use inventories overcomes two problems in estimating the extent of irrigated land: only a small portion of a large state is surveyed in a given year, and aerial photographs made on one day of the year do not provide an adequate picture of areas growing more than one crop per year. Developed for the state of California as a guide to controlling, protecting, conserving, and distributing water within the state. Adaptable to any large area in which large amounts of irrigation water are needed for agriculture. A combination of satellite images, aerial photography, and ground surveys yields data for computer analysis. The analyst also consults agricultural statistics, current farm reports, weather reports, and maps. These information sources aid in interpreting patterns, colors, textures, and shapes in Landsat images.
NASA Astrophysics Data System (ADS)
Gündoğan, R.; Alma, V.; Dindaroğlu, T.; Günal, H.; Yakupoğlu, T.; Susam, T.; Saltalı, K.
2017-11-01
Calculation of gullies from remote sensing images obtained from satellite or aerial platforms is often not possible because gullies in agricultural fields, defined as temporary gullies, are filled in a very short time by tillage operations. Therefore, fast and accurate estimation of sediment loss from temporary gully erosion is of great importance. In this study, we aimed to monitor and calculate soil losses caused by gully erosion in agricultural areas using low-altitude unmanned aerial vehicles. According to the calculation with Pix4D, gully volume was estimated to be 10.41 m3 and total soil loss was estimated to be 14.47 Mg. The RMSE of the estimations was found to be 0.89. The results indicated that unmanned aerial vehicles can be used to predict temporary gully erosion and soil losses.
Photocopy of recent aerial photograph (from U.S. Army Support Command ...
Photocopy of recent aerial photograph (from U.S. Army Support Command Hawaii, Wheeler Army Air Base, Hawaii) Photographer unknown, Circa 1990 OBLIQUE AERIAL VIEW SHOWING MAIN SECTION OF BASE WITH LAKE WILSON IN THE FOREGROUND AND WAIANAE MOUNTAINS IN THE BACKGROUND. - Schofield Barracks Military Reservation, Wilikina Drive & Kunia Road, Wahiawa, Honolulu County, HI
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
collected these datasets using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of using high-resolution cinema ...is another high-resolution camera that is cinema grade and high quality, with the capability of capturing videos with 4K resolution at 30 frames per...Imaging Systems and Accessories, Blackmagic Production Camera 4K, Crowd Counting using 4K Cameras, High resolution cinema grade digital video
Automated Camera Array Fine Calibration
NASA Technical Reports Server (NTRS)
Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang
2008-01-01
Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
Rocky Mountain Arsenal, Sections 26 and 25 Contamination Survey. Phase 1
1987-12-01
mapping specifications for scale, overlap, density, and image quality. Utilizing the aerial photography and ground control described above, orthophoto ...base maps with superimposed contours will be prepared. Orthophoto negatives will be prepared directly at the final...Remedial Investigation/Feasibility Study (RI/FS) at the Rocky Mountain Arsenal. Tasks 4 and 6 were prepared by Environmental Science and Engineering (ESE
2008-01-01
Figure 11. Screenshot of OrthoPro seam lines (pink), tiles (blue), and photos (green). Figure 12. Calibration craters (existing...with aerial targets for the orthophotography data collection, 1 per data collection tile (1 sq km). For the Phase I data collection, 9 LiDAR ground...Orthophotography data were collected concurrently with the LiDAR data collection. Based on the LiDAR flight line spacing parameters, the orthophoto images were
Remote Sensing Applied to Geology (Latest Citations from the Aerospace Database)
NASA Technical Reports Server (NTRS)
1996-01-01
The bibliography contains citations concerning the use of remote sensing in geological resource exploration. Technologies discussed include thermal, optical, photographic, and electronic imaging using ground-based, aerial, and satellite-borne devices. Analog and digital techniques to locate, classify, and assess geophysical features, structures, and resources are also covered. Application of remote sensing to petroleum and minerals exploration is treated in a separate bibliography. (Contains 50-250 citations and includes a subject term index and title list.)
NASA Technical Reports Server (NTRS)
Marrs, R. W.; Evans, M. A.
1974-01-01
The author has identified the following significant results. The crop types of a Great Plains study area were mapped from color infrared aerial photography. Each field was positively identified from field checks in the area. Enlarged (50x) density contour maps were constructed from three ERTS-1 images taken in the summer of 1973. The map interpreted from the aerial photography was compared to the density contour maps, and the accuracy of the ERTS-1 density contour map interpretations was determined. Changes in the vegetation during the growing season and harvest periods were detectable in the ERTS-1 imagery. Density contouring aids in the detection of such changes.
NASA Astrophysics Data System (ADS)
Vogels, M. F. A.; de Jong, S. M.; Sterk, G.; Addink, E. A.
2017-02-01
Land-use and land-cover (LULC) conversions have an important impact on land degradation, erosion and water availability. Information on historical land cover (change) is crucial for studying and modelling land- and ecosystem degradation. During the past decades major LULC conversions occurred in Africa, Southeast Asia and South America as a consequence of a growing population and economy. Most distinct is the conversion of natural vegetation into cropland. Historical LULC information can be derived from satellite imagery, but these data only date back to approximately 1972. Before the emergence of satellite imagery, landscapes were monitored by black-and-white (B&W) aerial photography. This photography is often visually interpreted, which is a very time-consuming approach. This study presents an innovative, semi-automated method to map cropland acreage from B&W photography. Cropland acreage was mapped at two study sites, in Ethiopia and in The Netherlands. For this purpose we used Geographic Object-Based Image Analysis (GEOBIA) and a Random Forest classification on a set of variables comprising texture, shape, slope, neighbour and spectral information. Overall mapping accuracies attained are 90% and 96% for the two study areas, respectively. This mapping method extends the period over which historical cropland expansion can be mapped purely from brightness information in B&W photography back to the 1930s, which is beneficial for regions where historical land-use statistics are mostly absent.
Aerial imaging with manned aircraft for precision agriculture
USDA-ARS?s Scientific Manuscript database
Over the last two decades, numerous commercial and custom-built airborne imaging systems have been developed and deployed for diverse remote sensing applications, including precision agriculture. More recently, unmanned aircraft systems (UAS) have emerged as a versatile and cost-effective platform f...
The application of GPS precise point positioning technology in aerial triangulation
NASA Astrophysics Data System (ADS)
Yuan, Xiuxiao; Fu, Jianhong; Sun, Hongxing; Toth, Charles
In traditional GPS-supported aerotriangulation, differential GPS (DGPS) positioning technology is used to determine the 3-dimensional coordinates of the perspective centers at exposure time with an accuracy of centimeter to decimeter level. This method can significantly reduce the number of ground control points (GCPs). However, the establishment of GPS reference stations for DGPS positioning is not only labor-intensive and costly, but also increases the difficulty of carrying out aerial photography. This paper proposes aerial triangulation supported by GPS precise point positioning (PPP) as a way to avoid the use of GPS reference stations and simplify the work of aerial photography. Firstly, we present the algorithm for GPS PPP in aerial triangulation applications. Secondly, the error law of the coordinates of perspective centers determined using GPS PPP is analyzed. Thirdly, based on GPS PPP and aerial triangulation software developed by the authors, four sets of actual aerial images taken from surveying and mapping projects, differing in both terrain and photographic scale, are given as experimental models. The four sets of actual data were taken over a flat region at a scale of 1:2500, a mountainous region at a scale of 1:3000, a high mountainous region at a scale of 1:32000 and an upland region at a scale of 1:60000, respectively. In these experiments, the GPS PPP results were compared with results obtained through DGPS positioning and traditional bundle block adjustment. In this way, the empirical positioning accuracy of GPS PPP in aerial triangulation can be estimated. Finally, the results of bundle block adjustment with airborne GPS controls from GPS PPP are analyzed in detail. The empirical results show that GPS PPP applied in aerial triangulation has a systematic error at the half-meter level and a stochastic error within a few decimeters.
However, if a suitable adjustment solution is adopted, the systematic error can be eliminated in GPS-supported bundle block adjustment. When four full GCPs are placed in the corners of the adjustment block and the systematic error is compensated with a set of independent unknown parameters for each strip, the final result of the bundle block adjustment with airborne GPS controls from PPP is the same as that with airborne GPS controls from DGPS. Although the accuracy of the former is a little lower than that of traditional bundle block adjustment with dense GCPs, it can still satisfy the accuracy requirement of photogrammetric point determination for topographic mapping at many scales.
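The strip-wise compensation idea in the abstract above can be illustrated numerically. The sketch below is a much-simplified stand-in for a real bundle adjustment (synthetic data, per-strip constant offsets only, no drift terms): a PPP-like systematic error of roughly half a meter per strip is estimated from a few controlled exposures and subtracted.

```python
# Simplified numerical sketch (synthetic data, not the authors' software):
# absorbing a per-strip systematic offset in PPP-derived projection centres
# with one unknown shift vector per flight strip.
import numpy as np

rng = np.random.default_rng(0)
true_centres = rng.uniform(0, 1000, size=(20, 3))        # 20 exposures, XYZ in metres
strip = np.repeat([0, 1], 10)                            # two flight strips
bias = np.array([[0.5, -0.3, 0.4],                       # per-strip systematic error (m),
                 [-0.2, 0.6, -0.5]])                     # roughly half-metre level
ppp = true_centres + bias[strip] + rng.normal(0, 0.05, size=(20, 3))  # PPP observations

# Estimate one shift vector per strip from exposures tied to ground control
# (here: the first three exposures of each strip are assumed controlled).
est = np.stack([(ppp[strip == s][:3] - true_centres[strip == s][:3]).mean(axis=0)
                for s in (0, 1)])
corrected = ppp - est[strip]
print("corrected RMS (m):", np.sqrt(((corrected - true_centres) ** 2).mean()))
```

After subtracting the estimated per-strip shifts, only the decimeter-level stochastic error remains, mirroring the paper's finding that the systematic part is removable while the random part is not.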
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and enhance image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through the meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image at higher resolution, or more perceptible to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
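A minimal one-level Haar wavelet fusion sketch in plain NumPy illustrates the general idea described in the abstract above (the paper's own transform and fusion rule may differ): approximation bands are averaged, and for each detail coefficient the input with the larger magnitude wins, keeping the sharper structure from either frame.

```python
# One-level 2-D Haar fusion of two registered images (illustrative only).
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (image sides must be even)."""
    lo = (img[0::2, :] + img[1::2, :]) / 2          # row-wise average
    hi = (img[0::2, :] - img[1::2, :]) / 2          # row-wise difference
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2            # approximation
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2            # horizontal detail
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2            # vertical detail
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2            # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo = np.empty((h, 2 * w)); hi = np.empty((h, 2 * w))
    lo[:, 0::2] = ll + lh; lo[:, 1::2] = ll - lh
    hi[:, 0::2] = hl + hh; hi[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = lo + hi; out[1::2, :] = lo - hi
    return out

def fuse(img_a, img_b):
    """Average the approximations; keep the larger-magnitude detail coefficient."""
    ca, cb = haar2d(img_a), haar2d(img_b)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)
```

The transform round-trips exactly, so fusing an image with itself returns it unchanged; with two differently degraded views, the max-magnitude rule tends to preserve the stronger edges from either input.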
Landslide Mapping Using Imagery Acquired by a Fixed-Wing Uav
NASA Astrophysics Data System (ADS)
Rau, J. Y.; Jhan, J. P.; Lo, C. F.; Lin, Y. S.
2011-09-01
In Taiwan, the average annual rainfall is about 2,500 mm, about three times the world average. Hill slopes, which are mostly under meta-stable conditions due to fragmented surface materials, can easily be disturbed by heavy typhoon rainfall and/or earthquakes, resulting in landslides and debris flows. Thus, an efficient data acquisition and disaster surveying method is critical for decision making. Compared with satellites and airplanes, the unmanned aerial vehicle (UAV) is a portable and dynamic platform for data acquisition, particularly when only a small target area is required. In this study, a fixed-wing UAV equipped with a consumer-grade digital camera (Canon EOS 450D), a flight control computer, a Garmin GPS receiver, and an attitude heading reference system (AHRS) is proposed. The adopted UAV has a flight endurance of about two hours, a flight control range of 20 km, and a payload of 3 kg, which is suitable for medium-scale mapping and surveying missions. In the paper, a test area 21.3 km2 in size containing hundreds of landslides induced by Typhoon Morakot is used for landslide mapping. The flight height is around 1,400 meters and the ground sampling distance of the acquired imagery is about 17 cm. Aerial triangulation, ortho-image generation and mosaicking are applied to the acquired images in advance. An automatic landslide detection algorithm is proposed based on the object-based image analysis (OBIA) technique. The color ortho-image and a digital elevation model (DEM) are used. The ortho-images before and after the typhoon are utilized to estimate new landslide regions. Experimental results show that the developed algorithm achieves a producer's accuracy up to 91%, a user's accuracy of 84%, and a Kappa index of 0.87. This demonstrates the feasibility of the landslide detection algorithm and the applicability of a fixed-wing UAV for landslide mapping.
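The accuracy measures reported in the abstract above come from a standard confusion-matrix analysis. A short sketch (with a hypothetical confusion matrix, not the paper's data) shows how producer's accuracy, user's accuracy, and the Kappa index are computed:

```python
# Producer's/user's accuracy and Cohen's Kappa from a binary confusion matrix.
# The tp/fp/fn/tn counts below are made-up illustrative values.

def accuracy_measures(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    producers = tp / (tp + fn)      # of true landslide pixels, fraction detected
    users = tp / (tp + fp)          # of detected pixels, fraction truly landslide
    po = (tp + tn) / n              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return producers, users, kappa

p, u, k = accuracy_measures(tp=910, fp=173, fn=90, tn=8827)
print(f"producer's {p:.2f}, user's {u:.2f}, kappa {k:.2f}")
```

Producer's accuracy penalizes omission errors (missed landslides), user's accuracy penalizes commission errors (false detections), and Kappa discounts the agreement expected by chance.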
Integrating unmanned aerial systems and LSPIV for rapid, cost-effective stream gauging
NASA Astrophysics Data System (ADS)
Lewis, Quinn W.; Lindroth, Evan M.; Rhoads, Bruce L.
2018-05-01
Quantifying flow in rivers is fundamental to assessments of water supply, water quality, ecological conditions, hydrological responses to storm events, and geomorphological processes. Image-based surface velocity measurements have shown promise in extending the range of discharge conditions that can be measured in the field. The use of Unmanned Aerial Systems (UAS) in image-based measurements of surface velocities has the potential to expand applications of this method. Thus far, few investigations have assessed this potential by evaluating the accuracy and repeatability of discharge measurements using surface velocities obtained from UAS. This study uses large-scale particle image velocimetry (LSPIV) derived from videos captured by cameras on a UAS and a fixed tripod to obtain discharge measurements at ten different stream locations in Illinois, USA. Discharge values are compared to reference values measured by an acoustic Doppler current profiler, a propeller meter, and established stream gauges. The results demonstrate the effects of UAS flight height, camera steadiness and leveling accuracy, video sampling frequency, and LSPIV interrogation area size on surface velocities, and show that the mean difference between fixed and UAS cameras is less than 10%. Differences between LSPIV-derived and reference discharge values are generally less than 20%, not systematically low or high, and not related to site parameters like channel width or depth, indicating that results are relatively insensitive to camera setup and image processing parameters typically required of LSPIV. The results also show that standard velocity indices (between 0.85 and 0.9) recommended for converting surface velocities to depth-averaged velocities yield reasonable discharge estimates, but are best calibrated at specific sites. 
The study recommends a basic methodology for LSPIV discharge measurements using UAS that is rapid, cost-efficient, and does not require major preparatory work at a measurement location, pre- and post-processing of imagery, or extensive background in image analysis and PIV.
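The velocity-index conversion described in the abstract above (surface velocities scaled by 0.85–0.9 to approximate depth-averaged velocities) feeds a conventional velocity-area discharge computation. A hedged sketch with made-up cross-section geometry and LSPIV velocities:

```python
# Mid-section discharge from LSPIV surface velocities (illustrative values).
# Q = sum over verticals of width_i * depth_i * (velocity_index * v_surface_i).

def midsection_discharge(stations_m, depths_m, surface_v_ms, velocity_index=0.85):
    """Mid-section method: each vertical represents half the span to its neighbours."""
    q = 0.0
    n = len(stations_m)
    for i in range(n):
        left = stations_m[max(i - 1, 0)]
        right = stations_m[min(i + 1, n - 1)]
        width = (right - left) / 2.0
        q += width * depths_m[i] * velocity_index * surface_v_ms[i]
    return q

stations = [0.0, 2.0, 4.0, 6.0, 8.0]   # distance across the channel (m)
depths   = [0.2, 0.8, 1.1, 0.9, 0.3]   # depth at each vertical (m)
surf_v   = [0.1, 0.5, 0.7, 0.5, 0.2]   # LSPIV surface velocities (m/s)
print(f"Q ≈ {midsection_discharge(stations, depths, surf_v):.2f} m³/s")
```

As the abstract notes, the velocity index is best calibrated at a specific site; 0.85 here is simply the lower end of the recommended range.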
Delineation of marsh types from Corpus Christi Bay, Texas, to Perdido Bay, Alabama, in 2010
Enwright, Nicholas M.; Hartley, Stephen B.; Couvillion, Brady R.; Brasher, Michael G.; Visser, Jenneke M.; Mitchell, Michael K.; Ballard, Bart M.; Parr, Mark W.; Wilson, Barry C.
2015-07-23
This study incorporates about 9,800 ground reference locations collected via helicopter surveys in coastal wetland areas. Decision-tree analyses were used to classify emergent marsh vegetation types by using ground reference data from helicopter vegetation surveys and independent variables such as multitemporal satellite-based multispectral imagery from 2009 to 2011, bare-earth digital elevation models based on airborne light detection and ranging (lidar), alternative contemporary land cover classifications, and other spatially explicit variables. Image objects were created from 2010 National Agriculture Imagery Program color-infrared aerial photography. The final classification is a 10-meter raster dataset that was produced by using a majority filter to classify image objects according to the marsh vegetation type covering the majority of each image object. The classification is dated 2010 because the year is both the midpoint of the classified multitemporal satellite-based imagery (2009–11) and the date of the high-resolution airborne imagery that was used to develop image objects. The seamless classification produced through this work can be used to help develop and refine conservation efforts for priority natural resources.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computation complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
Near Real-Time Georeference of Unmanned Aerial Vehicle Images for Post-Earthquake Response
NASA Astrophysics Data System (ADS)
Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.
2018-04-01
The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly reporting disaster information and monitoring seriously damaged objects after an earthquake. However, for the hundreds of UAV images collected in one flight sortie, the traditional data processing methods are image stitching and three-dimensional reconstruction, which take one to several hours and slow disaster response. If manual searching is employed, much more time is spent selecting images, and the selected images have no spatial reference. Therefore, a near-real-time rapid georeference method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of the world file format developed by ESRI. The C# language is adopted to build the UAV image rapid georeference software, combined with the Geospatial Data Abstraction Library (GDAL). The results show that the method can georeference up to one thousand UAV images within one minute, meeting the demand of rapid disaster response, which is of great value in disaster emergency applications.
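The world file mentioned in the abstract above is a six-line sidecar text file giving the affine transform from pixel (col, row) to map (x, y): x = A·col + B·row + C; y = D·col + E·row + F. A minimal sketch (not the authors' software; the GSD and coordinates are hypothetical, and the rotation terms from aircraft attitude are set to zero for brevity) writes one for a nadir, north-up frame:

```python
# Illustrative sketch: writing an ESRI world file for a UAV frame.
# World file line order is A, D, B, E, C, F. Rotation terms B and D would
# normally come from the flight attitude; here they are zero (north-up nadir).

def write_world_file(path, gsd_m, upper_left_x, upper_left_y):
    lines = [
        gsd_m,          # A: pixel size in x
        0.0,            # D: rotation term
        0.0,            # B: rotation term
        -gsd_m,         # E: pixel size in y (negative: rows grow southwards)
        upper_left_x,   # C: x map coordinate of the centre of the upper-left pixel
        upper_left_y,   # F: y map coordinate of the centre of the upper-left pixel
    ]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.10f}" for v in lines) + "\n")

# Hypothetical frame: 5 cm GSD, upper-left pixel at easting/northing (412345, 3678901)
write_world_file("frame_0001.jgw", 0.05, 412345.0, 3678901.0)
```

Because the world file is plain text and carries no raster data, writing one per frame is far cheaper than stitching or 3-D reconstruction, which is what makes the near-real-time georeference in the abstract feasible.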
Aerial spray technology: possibilities and limitations for control of pear thrips
Karl Mierzejewski
1991-01-01
The feasibility of using aerial application as a means of managing a pear thrips infestation in maple forest stands is examined, based on existing knowledge of forest aerial application acquired from theoretical and empirical studies. Specific strategies by which aerial application should be performed and potential problem areas are discussed. Two new tools, aircraft...
Environmental Changes Analysis in Bucharest City Using Corona, SPOT Hrv and Ikonos Images
NASA Astrophysics Data System (ADS)
Noaje, I.; Sion, I. G.
2012-08-01
Bucharest, the capital of Romania, faces serious difficulties as a result of its urban politics: an influx of people due to industrialization and the development of dormitory areas, the lack of a modern infrastructure, the absence of coherent long-term urban development policies, and continuous environmental degradation. This paper presents a multisensor study relying on multiple data sets, both analog and digital: satellite images (Corona - 1964 panchromatic, SPOT HRV - 1994 multispectral and panchromatic, IKONOS - 2007 multispectral), aerial photographs from 1994, and complementary products (topographic and thematic maps). A georeferenced basis must be generated before change detection can be performed. The digital elevation model is generated from 1:5,000-scale aerial photography acquired in 1994. First a height correction is applied, followed by an affine transformation to ground control points identified both in the aerial photographs and in the IKONOS image. The pansharpened SPOT HRV satellite image was rectified to the georeferenced IKONOS image using an affine transformation. The Corona panoramic negative film was scanned and rectified by rubber sheeting. The first 25 years of the study period (1964-1989) are characterized by the growth of industrial areas, high-density apartment-building residential areas, and leisure green areas, achieved through the demolition of cultural heritage areas (centuries-old churches and architectural monuments). Changes between the images were determined partially through visual interpretation, using elements such as location, size, shape, shadow, tone, texture, and pattern (Corona image), and partially using unsupervised classification (SPOT HRV and IKONOS). The second period of 18 years (1989-2007) showed considerable growth of residential areas in the city's surroundings, together with the shrinking of green areas and massive deforestation in areas that had been confiscated and were later returned to their original owners.
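An affine rectification to ground control points of the kind described can be estimated by least squares once at least three GCPs are identified in both images. A minimal sketch with synthetic points (not the paper's data):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping image points src_pts
    (N x 2) onto ground control points dst_pts (N x 2); needs N >= 3."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows: [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params  # 3 x 2: linear part in the top rows, translation below

def apply_affine(params, pts):
    """Apply a fitted affine transform to an N x 2 array of points."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params
```

With four GCPs related by a scale of 2 and a shift of (10, 20), the fitted transform maps (0.5, 0.5) to (11.0, 21.0).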
NASA Astrophysics Data System (ADS)
Phan, Khoi A.; Spence, Chris A.; Dakshina-Murthy, S.; Bala, Vidya; Williams, Alvina M.; Strener, Steve; Eandi, Richard D.; Li, Junling; Karklin, Linard
1999-12-01
As advanced process technologies in the wafer fabs push patterning processes toward lower k1 factors for sub-wavelength resolution printing, reticles are required to use optical proximity correction (OPC) and phase-shifted masks (PSM) for resolution enhancement. For OPC/PSM mask technology, defect printability is one of the major concerns. Current reticle inspection tools available on the market are sometimes not capable of consistently differentiating between an OPC feature and a true random defect. Due to the process complexity and the high cost associated with making OPC/PSM reticles, it is important for both mask shops and lithography engineers to understand the impact of different defect types and sizes on printability. The Aerial Image Measurement System (AIMS) has been used in mask shops for a number of years for reticle applications such as aerial image simulation and transmission measurement of repaired defects. The Virtual Stepper System (VSS) provides an alternative method for defect printability simulation and analysis using reticle images captured by an optical inspection or review system. In this paper, pre-programmed defects and repairs from a Defect Sensitivity Monitor (DSM) reticle with 200 nm minimum features (at 1x) are studied for printability. The resist lines simulated by AIMS and VSS are both compared to SEM images of resist wafers, qualitatively and quantitatively, using CD verification. Process window comparisons between unrepaired and repaired defects for both good and bad repair cases are shown. The effect of mask repairs on resist pattern images for the binary mask case is discussed. AIMS simulation was done at International Sematech, Virtual Stepper simulation at Zygo, and resist wafers were processed at the AMD Submicron Development Center using a DUV lithographic process for 0.18 micrometer logic process technology.
Murphy, Marilyn K.; Kowalski, Kurt P.; Grapentine, Joel L.
2010-01-01
The geocontrol template method was developed to georeference multiple, overlapping analog aerial photographs without reliance upon conventionally obtained horizontal ground control. The method was tested as part of a long-term wetland habitat restoration project at a Lake Erie coastal wetland complex in the U.S. Fish and Wildlife Service Ottawa National Wildlife Refuge. As in most coastal wetlands, annually identifiable ground-control features required to georeference photo-interpreted data are difficult to find. The geocontrol template method relies on the following four components: (a) an uncontrolled aerial photo mosaic of the study area, (b) global positioning system (GPS) derived horizontal coordinates of each photo’s principal point, (c) a geocontrol template created by the transfer of fiducial markings and calculated principal points to clear acetate from individual photographs arranged in a mosaic, and (d) the root-mean-square-error testing of the system to ensure an acceptable level of planimetric accuracy. Once created for a study area, the geocontrol template can be registered in geographic information system (GIS) software to facilitate interpretation of multiple images without individual image registration. The geocontrol template enables precise georeferencing of single images within larger blocks of photographs using a repeatable and consistent method.
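The root-mean-square-error test in component (d) reduces to a short computation over check-point residuals; a minimal sketch (coordinate values in the usage note are illustrative):

```python
import math

def planimetric_rmse(measured, reference):
    """Horizontal RMSE between measured and reference (x, y)
    coordinates of the same control points."""
    squared = [(mx - rx) ** 2 + (my - ry) ** 2
               for (mx, my), (rx, ry) in zip(measured, reference)]
    return math.sqrt(sum(squared) / len(squared))
```

A single check point measured at (3, 4) against a reference at the origin, for instance, yields an RMSE of 5.0 map units; the result is compared against the project's accuracy threshold.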
NASA Astrophysics Data System (ADS)
Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.
2015-08-01
With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and a SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than the available SWOOP imagery for mapping percent cover of the submerged and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km2) wetlands, or portions of larger wetlands, throughout a year.
NASA Astrophysics Data System (ADS)
Jorgenson, J. C.; Jorgenson, M. T.; Boldenow, M.; Orndahl, K. M.
2016-12-01
We documented landscape change over a 60-year period in the Arctic National Wildlife Refuge in northeastern Alaska using aerial photographs and satellite images. We used a stratified random sample to allow inference to the whole refuge (78,050 km2), with five random sites in each of seven ecoregions. Each site (2 km2) had a systematic grid of 100 points, for a total of 3,500 points. We chose study sites in the overlap area covered by acceptable imagery in three time periods: aerial photographs from 1947-1955 and 1978-1988, and QuickBird and IKONOS satellite images from 2000-2007. At each point, a 10-meter-radius circle was visually evaluated in ArcMap for each time period for vegetation type, disturbance, presence of ice wedge polygon microtopography, and surface water. A landscape change category was assigned to each point based on differences detected between the three periods. Change types were assigned for time interval 1, time interval 2, and overall. Additional explanatory variables included elevation, slope, aspect, geology, physiography, and temperature. Overall, 23% of points changed over the study period. Fire was the most common change agent, affecting 28% of the Boreal Forest points. The next most common change was degradation of soil ice wedges (thermokarst), detected at 12% of the points on the North Slope Tundra. Other common changes included increases in tree or shrub cover (7% of Boreal Forest and Brooks Range points) and erosion or deposition on river floodplains and at the Beaufort Sea coast. Changes on the North Slope Tundra tended to be related to landscape wetting, mainly thermokarst. Changes in the Boreal Forest tended to involve landscape drying, including fire, reduced lake area, and tree increase on wet sites. The second time interval coincided with a shift toward a warmer climate and had greater change in several categories, including thermokarst, lake changes, and tree and shrub increase.
NASA Astrophysics Data System (ADS)
Müller, M. S.; Urban, S.; Jutzi, B.
2017-08-01
The number of unmanned aerial vehicles (UAVs) is increasing as low-cost airborne systems become available to a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to determine the vehicle's trajectory. The drawback of satellite-based navigation is failures caused by occlusions and multi-path interference. Besides this, local image-based solutions such as Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can be used, for example, to support the GNSS solution by closing trajectory gaps, but they are computationally expensive. However, if the trajectory estimation is interrupted or unavailable, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
NASA Astrophysics Data System (ADS)
Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina
2018-01-01
Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with various optoelectronic sensors imaging in the visible and infrared spectral ranges, as well as thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used, among other things, for producing orthophoto maps for precision agriculture. One major problem arises from the use of low-cost custom, compact NIR cameras with wide-angle lenses that introduce vignetting. In numerous cases, such cameras acquire images of low radiometric quality, depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery from a custom sensor. The method utilizes statistical analysis of the NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and to assess whether a rerun of the photogrammetric flight is necessary.
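A statistics-based quality index of the kind described can be as simple as a per-image signal-to-noise proxy mapped onto the three categories. The sketch below uses the mean/standard-deviation ratio as that proxy; both the proxy and the thresholds are invented for illustration, not the paper's calibrated index:

```python
import numpy as np

def radiometric_quality(image, snr_good=20.0, snr_medium=10.0):
    """Classify an image band into good/medium/low radiometric
    quality from a crude SNR proxy (mean divided by std).
    Thresholds are illustrative defaults, not calibrated values."""
    img = np.asarray(image, float)
    snr = img.mean() / (img.std() + 1e-12)  # avoid division by zero
    if snr >= snr_good:
        return "good", snr
    if snr >= snr_medium:
        return "medium", snr
    return "low", snr
```

A nearly uniform band scores "good", while a band whose variation is as large as its mean scores "low"; a real index would also account for exposure and vignetting.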
Sandino, Juan; Wooler, Adam; Gonzalez, Felipe
2017-09-24
The increased technological developments in Unmanned Aerial Vehicles (UAVs), combined with artificial intelligence and Machine Learning (ML) approaches, have opened the possibility of remote sensing of extensive areas of arid lands. In this paper, a novel approach to the detection of termite mounds using a UAV, hyperspectral imagery, ML, and digital image processing is presented. A new pipeline is proposed to detect termite mounds automatically and consequently to reduce detection times. For the classification stage, the outcomes of several ML classification algorithms were studied, and support vector machines were selected as the best approach for image classification of pre-existing termite mounds. Various test conditions were applied to the proposed algorithm, obtaining an overall accuracy of 68%. Images with satisfactory mound detection showed that the method is resolution-dependent. Mounds were detected regardless of their rotation and position in the aerial image. However, image distortion reduced the number of detected mounds because of the shape analysis method included in the object detection phase, and image resolution remains decisive for accurate results. Hyperspectral imagery demonstrated better capabilities for classifying a large set of materials than traditional segmentation methods applied to RGB images only.
RESTORATION OF ATMOSPHERICALLY DEGRADED IMAGES. VOLUME 3.
AERIAL CAMERAS, LASERS, ILLUMINATION, TRACKING CAMERAS, DIFFRACTION, PHOTOGRAPHIC GRAIN, DENSITY, DENSITOMETERS, MATHEMATICAL ANALYSIS, OPTICAL SCANNING, SYSTEMS ENGINEERING, TURBULENCE, OPTICAL PROPERTIES, SATELLITE TRACKING SYSTEMS.
Photocopy of recent aerial photograph (from U.S. Army Support Command ...
Photocopy of recent aerial photograph (from U.S. Army Support Command Hawaii, Wheeler Army Air Base, Hawaii) Photographer unknown, Circa 1990 AERIAL VIEW SHOWING MAIN SECTION OF BASE, BETWEEN KUNIA ROAD, WILIKINA DRIVE, AND McMAHON ROAD, AS WELL AS ADJACENT PINEAPPLE FIELDS, AND LAKE WILSON. - Schofield Barracks Military Reservation, Wilikina Drive & Kunia Road, Wahiawa, Honolulu County, HI
Online phase measuring profilometry for rectilinear moving object by image correction
NASA Astrophysics Data System (ADS)
Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin
2015-11-01
In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. When the object is moving rectilinearly online, differences in the size and pixel position of the object across the captured deformed patterns violate the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial (top-down) view and then translated based on feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
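The oblique-to-aerial-view reprojection can be modeled as a planar homography applied to pixel coordinates. A minimal sketch; in practice the 3 x 3 matrix comes from the system's calibration, so the matrices used in the example are illustrative:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3 x 3 planar homography H to an array of (x, y) pixel
    coordinates, returning the reprojected (x', y') coordinates."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian
```

The identity matrix leaves points unchanged, and a pure scaling homography doubles each coordinate; warping a full image applies the same mapping per pixel with interpolation.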
Black, Robert W.; Haggland, Alan; Crosby, Greg
2003-01-01
Instream hydraulic and riparian habitat conditions and stream temperatures were characterized for selected stream segments in the Upper White River Basin, Washington. An aerial multispectral imaging system used digital cameras to photograph the stream segments across multiple wavelengths to characterize fish habitat and temperature conditions. All imagery was georeferenced. Fish habitat features were photographed at a resolution of 0.5 meter and temperature imagery at a resolution of 1.0 meter. The digital multispectral imagery was classified using commercially available software. Aerial photographs were taken on September 21, 1999. Field habitat data were collected from August 23 to October 12, 1999, to evaluate the measurement accuracy and effectiveness of the multispectral imaging in determining the extent of the instream habitat variables. Fish habitat types assessed by this method were the abundance of instream hydraulic features such as pool and riffle habitats, turbulent and non-turbulent habitats, riparian composition, the abundance of large woody debris in the stream and riparian zone, and stream temperatures. Factors such as the abundance of instream woody debris, the location and frequency of pools, and stream temperatures generally are known to have a significant impact on salmon. Instream woody debris creates the habitat complexity necessary to maintain a diverse and healthy salmon population. The abundance of pools is indicative of a stream's ability to support fish and other aquatic organisms. Changes in water temperature can affect aquatic organisms by altering metabolic rates and oxygen requirements, altering their sensitivity to toxic materials, and affecting their ability to avoid predators.
The specific objectives of this project were to evaluate the use of an aerial multispectral imaging system to accurately identify instream hydraulic features and surface-water temperatures in the Upper White River Basin, to use the multispectral system to help establish baseline instream/riparian habitat conditions in the study area, and to qualitatively assess the imaging system for possible use in other Puget Sound rivers. For the most part, all multispectral imagery-based estimates of total instream riffle and pool area were less than field measurements. The imagery-based estimates for riffle habitat area ranged from 35.5 to 83.3 percent less than field measurements. Pool habitat estimates ranged from 139.3 percent greater than field measurements to 94.0 percent less than field measurements. Multispectral imagery-based estimates of turbulent habitat conditions ranged from 9.3 percent greater than field measurements to 81.6 percent less than field measurements. Multispectral imagery-based estimates of non-turbulent habitat conditions ranged from 27.7 to 74.1 percent less than field measurements. The absolute average percentage of difference between field and imagery-based habitat type areas was less for the turbulent and non-turbulent habitat type categories than for pools and riffles. The estimate of woody debris by multispectral imaging was substantially different than field measurements; percentage of differences ranged from +373.1 to -100 percent. Although the total area of riffles, pools, and turbulent and non-turbulent habitat types measured in the field were all substantially higher than those estimated from the multispectral imagery, the percentage of composition of each habitat type was not substantially different between the imagery-based estimates and field measurements.
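The percent differences quoted above express each imagery-based estimate relative to the corresponding field measurement; a one-line sketch of that convention (the values in the example are illustrative, not the study's data):

```python
def percent_difference(imagery_estimate, field_measurement):
    """Signed percent difference of an imagery-based estimate
    relative to the corresponding field measurement: positive when
    the imagery overestimates, negative when it underestimates."""
    return 100.0 * (imagery_estimate - field_measurement) / field_measurement
```

An imagery estimate of 150 m2 against a field measurement of 100 m2, for instance, is reported as +50 percent.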
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principle component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
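The low-rank/sparse separation that RPCA performs on registered frames can be illustrated with a much cruder stand-in: after background registration, noise and background that stay put across frames survive a temporal median, and subtracting that median leaves the moving or transient component. A toy sketch, not the paper's BRANF algorithm:

```python
import numpy as np

def temporal_median_residual(frames):
    """Toy stand-in for the low-rank step: estimate the static
    component (background plus fixed-pattern noise) of registered
    frames as the temporal median, and return the residual stack."""
    stack = np.asarray(frames, float)
    static = np.median(stack, axis=0)  # per-pixel static component
    return stack - static              # transient/moving residual
```

A transient spike present in only one frame survives in the residual, while identical frames cancel out entirely; RPCA replaces the median with a principled low-rank decomposition that is robust to larger moving regions.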
3D Power Line Extraction from Multiple Aerial Images.
Oh, Jaehong; Lee, Changno
2017-09-29
Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be strung between tower structures in a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with changing weather, eventually exceeding the planned tolerances. Excessive sag can cause serious accidents and shortens the service life of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines, which are often difficult to approach. Unlike the conventional image-to-object space approach, we used an object-to-image space approach based on cubic grid points. We processed four strips of aerial images to automatically extract the power line points in object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement, with an elevation accuracy of a few centimeters.
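The object-to-image approach evaluates candidate object-space grid points by projecting them into each image, which for a frame camera is the classical collinearity model. A minimal sketch; the camera parameters in the example are illustrative, not the paper's calibration:

```python
import math

def project_point(X, Y, Z, X0, Y0, Z0, focal,
                  omega=0.0, phi=0.0, kappa=0.0):
    """Project an object-space point (X, Y, Z) into image coordinates
    via the collinearity equations, given the perspective center
    (X0, Y0, Z0), focal length, and rotation angles in radians."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    # Standard omega-phi-kappa rotation matrix.
    r11 = cp * ck
    r12 = co * sk + so * sp * ck
    r13 = so * sk - co * sp * ck
    r21 = -cp * sk
    r22 = co * ck - so * sp * sk
    r23 = so * ck + co * sp * sk
    r31 = sp
    r32 = -so * cp
    r33 = co * cp
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    denom = r31 * dX + r32 * dY + r33 * dZ
    x = -focal * (r11 * dX + r12 * dY + r13 * dZ) / denom
    y = -focal * (r21 * dX + r22 * dY + r23 * dZ) / denom
    return x, y
```

For a nadir camera 100 m above a ground point offset 10 m in X, with a 50 mm focal length, the point lands 5 mm from the principal point; grid points whose projections fall on detected line pixels in several images are kept as power line points.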
NASA Astrophysics Data System (ADS)
Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark
2013-05-01
The development concept paper for the MSIC system was first introduced by these authors in August 2012. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3 kg MSIC is a self-contained, compact, variable-configuration, low-cost, real-time precision metadata annotator with an embedded INS/GPS, designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable to emergency responders and scientists. The MSIC will significantly enhance their capability by providing a direct transformation of these COTS sensor tools into low-cost, real-time aerial mapping systems.
NASA Astrophysics Data System (ADS)
Marzolff, Irene
2014-05-01
One hundred years after the first publication on aerial photography taken from unmanned aerial platforms (Arthur Batut 1890), small-format aerial photography (SFAP) became a distinct niche within remote sensing during the 1990s. Geographers, plant biologists, archaeologists and other researchers with geospatial interests re-discovered the usefulness of unmanned platforms for taking high-resolution, low-altitude photographs that could then be digitized and analysed with geographical information systems, (softcopy) photogrammetry and image processing techniques originally developed for digital satellite imagery. Even before the ubiquity of digital consumer-grade cameras and 3D analysis software accessible to the photogrammetric layperson, do-it-yourself remote sensing using kites, blimps, drones and micro air vehicles literally enabled the questing researcher to get their own pictures of the world. As a flexible, cost-effective method, SFAP offered images with high spatial and temporal resolutions that could be ideally adapted to the scales of landscapes, forms and distribution patterns to be monitored. During the last five years, this development has been significantly accelerated by the rapid technological advancements of GPS navigation, autopiloting and revolutionary softcopy-photogrammetry techniques. State-of-the-art unmanned aerial systems (UAS) now allow automatic flight planning, autopilot-controlled aerial surveys, ground control-free direct georeferencing and DEM plus orthophoto generation with centimeter accuracy, all within the space of one day. The ease of use of current UAS and processing software for the generation of high-resolution topographic datasets and spectacular visualizations is tempting and has spurred the number of publications on these issues - but which advancements in our knowledge and understanding of geomorphological processes have we seen and can we expect in the future? 
This presentation traces the development of the last two decades by presenting and discussing examples for geomorphological research using UAS, mostly from the field of soil erosion monitoring.